00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2250 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3513 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.112 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.113 The recommended git tool is: git 00:00:00.113 using credential 00000000-0000-0000-0000-000000000002 00:00:00.115 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.170 Fetching changes from the remote Git repository 00:00:00.173 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.227 Using shallow fetch with depth 1 00:00:00.227 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.227 > git --version # timeout=10 00:00:00.281 > git --version # 'git version 2.39.2' 00:00:00.281 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.314 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.314 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:15.474 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:15.486 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:15.499 Checking out Revision 1913354106d3abc3c9aeb027a32277f58731b4dc (FETCH_HEAD) 00:00:15.499 > git config core.sparsecheckout # timeout=10 00:00:15.513 > git read-tree -mu HEAD # timeout=10 00:00:15.529 > git checkout -f 1913354106d3abc3c9aeb027a32277f58731b4dc # timeout=5 00:00:15.549 Commit message: "jenkins: update jenkins to 2.462.2 and update plugins to its latest versions" 00:00:15.550 > git rev-list --no-walk 1913354106d3abc3c9aeb027a32277f58731b4dc # timeout=10 00:00:15.628 [Pipeline] Start of Pipeline 00:00:15.645 [Pipeline] library 00:00:15.648 Loading library shm_lib@master 00:00:15.648 Library shm_lib@master is cached. Copying from home. 00:00:15.668 [Pipeline] node 00:00:15.677 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:15.679 [Pipeline] { 00:00:15.690 [Pipeline] catchError 00:00:15.691 [Pipeline] { 00:00:15.703 [Pipeline] wrap 00:00:15.711 [Pipeline] { 00:00:15.719 [Pipeline] stage 00:00:15.721 [Pipeline] { (Prologue) 00:00:15.935 [Pipeline] sh 00:00:16.220 + logger -p user.info -t JENKINS-CI 00:00:16.239 [Pipeline] echo 00:00:16.241 Node: WFP4 00:00:16.250 [Pipeline] sh 00:00:16.550 [Pipeline] setCustomBuildProperty 00:00:16.564 [Pipeline] echo 00:00:16.566 Cleanup processes 00:00:16.572 [Pipeline] sh 00:00:16.858 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:16.858 1749271 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:16.872 [Pipeline] sh 00:00:17.157 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:17.157 ++ grep -v 'sudo pgrep' 00:00:17.157 ++ awk '{print $1}' 00:00:17.157 + sudo kill -9 00:00:17.157 + true 00:00:17.174 [Pipeline] cleanWs 00:00:17.186 [WS-CLEANUP] Deleting project workspace... 00:00:17.186 [WS-CLEANUP] Deferred wipeout is used... 
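(Editor's note) The prologue above checks out the build-pool repo, then clears any SPDK processes left over from a previous autotest run before wiping the workspace. A minimal sketch of that cleanup idiom, reconstructed from the pgrep/kill trace shown above (illustrative only, not the exact Jenkins step):

    #!/usr/bin/env bash
    # Kill any SPDK processes left over from a previous autotest run.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest   # path taken from the log above
    # List matching processes, drop the pgrep command itself, keep only the PIDs.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # kill with an empty PID list would fail, so tolerate that case (the '+ true' in the log).
    [ -n "$pids" ] && sudo kill -9 $pids || true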
00:00:17.193 [WS-CLEANUP] done 00:00:17.199 [Pipeline] setCustomBuildProperty 00:00:17.216 [Pipeline] sh 00:00:17.500 + sudo git config --global --replace-all safe.directory '*' 00:00:17.593 [Pipeline] httpRequest 00:00:17.977 [Pipeline] echo 00:00:17.979 Sorcerer 10.211.164.101 is alive 00:00:17.988 [Pipeline] retry 00:00:17.990 [Pipeline] { 00:00:18.004 [Pipeline] httpRequest 00:00:18.009 HttpMethod: GET 00:00:18.009 URL: http://10.211.164.101/packages/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz 00:00:18.010 Sending request to url: http://10.211.164.101/packages/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz 00:00:18.031 Response Code: HTTP/1.1 200 OK 00:00:18.032 Success: Status code 200 is in the accepted range: 200,404 00:00:18.032 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz 00:00:38.278 [Pipeline] } 00:00:38.297 [Pipeline] // retry 00:00:38.306 [Pipeline] sh 00:00:38.591 + tar --no-same-owner -xf jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz 00:00:38.608 [Pipeline] httpRequest 00:00:39.270 [Pipeline] echo 00:00:39.271 Sorcerer 10.211.164.101 is alive 00:00:39.283 [Pipeline] retry 00:00:39.285 [Pipeline] { 00:00:39.302 [Pipeline] httpRequest 00:00:39.307 HttpMethod: GET 00:00:39.307 URL: http://10.211.164.101/packages/spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz 00:00:39.308 Sending request to url: http://10.211.164.101/packages/spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz 00:00:39.319 Response Code: HTTP/1.1 200 OK 00:00:39.319 Success: Status code 200 is in the accepted range: 200,404 00:00:39.320 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz 00:01:13.892 [Pipeline] } 00:01:13.917 [Pipeline] // retry 00:01:13.925 [Pipeline] sh 00:01:14.210 + tar --no-same-owner -xf spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz 00:01:16.761 [Pipeline] sh 00:01:17.046 + git -C spdk log --oneline -n5 00:01:17.046 3950cd1bb bdev/nvme: Change spdk_bdev_reset() to succeed if at least one nvme_ctrlr is reconnected 00:01:17.046 f9141d271 test/blob: Add BLOCKLEN macro in blob_ut 00:01:17.046 82c46626a lib/event: implement scheduler trace events 00:01:17.046 fa6aec495 lib/thread: register thread owner type for scheduler trace events 00:01:17.046 1876d41a3 include/spdk_internal: define scheduler tracegroup and tracepoints 00:01:17.065 [Pipeline] withCredentials 00:01:17.077 > git --version # timeout=10 00:01:17.090 > git --version # 'git version 2.39.2' 00:01:17.105 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:17.106 [Pipeline] { 00:01:17.115 [Pipeline] retry 00:01:17.117 [Pipeline] { 00:01:17.131 [Pipeline] sh 00:01:17.410 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:17.682 [Pipeline] } 00:01:17.701 [Pipeline] // retry 00:01:17.706 [Pipeline] } 00:01:17.724 [Pipeline] // withCredentials 00:01:17.734 [Pipeline] httpRequest 00:01:18.152 [Pipeline] echo 00:01:18.154 Sorcerer 10.211.164.101 is alive 00:01:18.164 [Pipeline] retry 00:01:18.166 [Pipeline] { 00:01:18.181 [Pipeline] httpRequest 00:01:18.186 HttpMethod: GET 00:01:18.186 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:18.186 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:18.194 Response Code: HTTP/1.1 200 OK 00:01:18.194 Success: Status code 200 is in the accepted range: 200,404 00:01:18.195 Saving 
response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:47.753 [Pipeline] } 00:01:47.770 [Pipeline] // retry 00:01:47.777 [Pipeline] sh 00:01:48.059 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:49.450 [Pipeline] sh 00:01:49.733 + git -C dpdk log --oneline -n5 00:01:49.733 caf0f5d395 version: 22.11.4 00:01:49.733 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:49.733 dc9c799c7d vhost: fix missing spinlock unlock 00:01:49.733 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:49.733 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:49.743 [Pipeline] } 00:01:49.757 [Pipeline] // stage 00:01:49.767 [Pipeline] stage 00:01:49.769 [Pipeline] { (Prepare) 00:01:49.789 [Pipeline] writeFile 00:01:49.805 [Pipeline] sh 00:01:50.088 + logger -p user.info -t JENKINS-CI 00:01:50.099 [Pipeline] sh 00:01:50.380 + logger -p user.info -t JENKINS-CI 00:01:50.391 [Pipeline] sh 00:01:50.672 + cat autorun-spdk.conf 00:01:50.672 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:50.672 SPDK_TEST_NVMF=1 00:01:50.672 SPDK_TEST_NVME_CLI=1 00:01:50.672 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:50.672 SPDK_TEST_NVMF_NICS=e810 00:01:50.672 SPDK_TEST_VFIOUSER=1 00:01:50.672 SPDK_RUN_UBSAN=1 00:01:50.672 NET_TYPE=phy 00:01:50.672 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:50.673 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:50.679 RUN_NIGHTLY=1 00:01:50.684 [Pipeline] readFile 00:01:50.707 [Pipeline] withEnv 00:01:50.709 [Pipeline] { 00:01:50.721 [Pipeline] sh 00:01:51.003 + set -ex 00:01:51.003 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:51.003 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:51.003 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.003 ++ SPDK_TEST_NVMF=1 00:01:51.003 ++ SPDK_TEST_NVME_CLI=1 00:01:51.003 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:51.003 ++ SPDK_TEST_NVMF_NICS=e810 00:01:51.003 ++ SPDK_TEST_VFIOUSER=1 00:01:51.003 ++ SPDK_RUN_UBSAN=1 00:01:51.003 ++ NET_TYPE=phy 00:01:51.003 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:51.003 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:51.003 ++ RUN_NIGHTLY=1 00:01:51.003 + case $SPDK_TEST_NVMF_NICS in 00:01:51.003 + DRIVERS=ice 00:01:51.003 + [[ tcp == \r\d\m\a ]] 00:01:51.003 + [[ -n ice ]] 00:01:51.003 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:51.003 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:51.003 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:51.003 rmmod: ERROR: Module i40iw is not currently loaded 00:01:51.003 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:51.003 + true 00:01:51.003 + for D in $DRIVERS 00:01:51.003 + sudo modprobe ice 00:01:51.003 + exit 0 00:01:51.010 [Pipeline] } 00:01:51.018 [Pipeline] // withEnv 00:01:51.021 [Pipeline] } 00:01:51.032 [Pipeline] // stage 00:01:51.037 [Pipeline] catchError 00:01:51.038 [Pipeline] { 00:01:51.049 [Pipeline] timeout 00:01:51.049 Timeout set to expire in 1 hr 0 min 00:01:51.050 [Pipeline] { 00:01:51.063 [Pipeline] stage 00:01:51.065 [Pipeline] { (Tests) 00:01:51.076 [Pipeline] sh 00:01:51.358 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:51.358 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:51.358 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:51.358 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:51.358 + 
DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:51.358 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:51.358 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:51.358 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:51.358 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:51.358 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:51.358 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:51.358 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:51.358 + source /etc/os-release 00:01:51.358 ++ NAME='Fedora Linux' 00:01:51.358 ++ VERSION='39 (Cloud Edition)' 00:01:51.358 ++ ID=fedora 00:01:51.358 ++ VERSION_ID=39 00:01:51.358 ++ VERSION_CODENAME= 00:01:51.358 ++ PLATFORM_ID=platform:f39 00:01:51.358 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:51.358 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:51.358 ++ LOGO=fedora-logo-icon 00:01:51.358 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:51.358 ++ HOME_URL=https://fedoraproject.org/ 00:01:51.358 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:51.358 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:51.358 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:51.358 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:51.358 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:51.358 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:51.358 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:51.358 ++ SUPPORT_END=2024-11-12 00:01:51.358 ++ VARIANT='Cloud Edition' 00:01:51.358 ++ VARIANT_ID=cloud 00:01:51.358 + uname -a 00:01:51.358 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:01:51.358 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:53.266 Hugepages 00:01:53.266 node hugesize free / total 00:01:53.266 node0 1048576kB 0 / 0 00:01:53.266 node0 2048kB 0 / 0 00:01:53.266 node1 1048576kB 0 / 0 00:01:53.266 node1 2048kB 0 / 0 00:01:53.266 00:01:53.266 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:53.266 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:53.266 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:53.266 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:53.266 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:53.266 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:53.266 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:53.526 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:53.526 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:53.526 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:53.526 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:53.526 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:53.526 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:53.526 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:53.526 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:53.526 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:53.526 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:53.526 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:53.526 + rm -f /tmp/spdk-ld-path 00:01:53.526 + source autorun-spdk.conf 00:01:53.526 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.526 ++ SPDK_TEST_NVMF=1 00:01:53.526 ++ SPDK_TEST_NVME_CLI=1 00:01:53.526 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:53.526 ++ SPDK_TEST_NVMF_NICS=e810 00:01:53.526 ++ SPDK_TEST_VFIOUSER=1 00:01:53.526 ++ SPDK_RUN_UBSAN=1 00:01:53.526 ++ NET_TYPE=phy 00:01:53.526 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:53.526 ++ 
SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:53.526 ++ RUN_NIGHTLY=1 00:01:53.526 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:53.526 + [[ -n '' ]] 00:01:53.526 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:53.526 + for M in /var/spdk/build-*-manifest.txt 00:01:53.526 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:53.526 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:53.526 + for M in /var/spdk/build-*-manifest.txt 00:01:53.526 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:53.526 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:53.526 + for M in /var/spdk/build-*-manifest.txt 00:01:53.526 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:53.526 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:53.526 ++ uname 00:01:53.526 + [[ Linux == \L\i\n\u\x ]] 00:01:53.526 + sudo dmesg -T 00:01:53.526 + sudo dmesg --clear 00:01:53.526 + dmesg_pid=1750226 00:01:53.526 + [[ Fedora Linux == FreeBSD ]] 00:01:53.526 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:53.526 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:53.526 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:53.526 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:53.526 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:53.526 + [[ -x /usr/src/fio-static/fio ]] 00:01:53.526 + sudo dmesg -Tw 00:01:53.526 + export FIO_BIN=/usr/src/fio-static/fio 00:01:53.526 + FIO_BIN=/usr/src/fio-static/fio 00:01:53.526 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:53.526 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:53.526 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:53.526 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:53.526 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:53.526 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:53.526 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:53.526 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:53.526 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:53.526 Test configuration: 00:01:53.526 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.526 SPDK_TEST_NVMF=1 00:01:53.526 SPDK_TEST_NVME_CLI=1 00:01:53.526 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:53.526 SPDK_TEST_NVMF_NICS=e810 00:01:53.526 SPDK_TEST_VFIOUSER=1 00:01:53.526 SPDK_RUN_UBSAN=1 00:01:53.526 NET_TYPE=phy 00:01:53.526 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:53.526 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:53.787 RUN_NIGHTLY=1 10:56:51 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:53.787 10:56:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:53.787 10:56:51 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:53.787 10:56:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:53.787 10:56:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:53.787 10:56:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:53.787 10:56:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.787 10:56:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.787 10:56:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.787 10:56:51 -- paths/export.sh@5 -- $ export PATH 00:01:53.787 10:56:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.787 10:56:51 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:53.787 10:56:51 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:53.787 10:56:51 -- 
common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728205011.XXXXXX 00:01:53.787 10:56:51 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728205011.FZOFlq 00:01:53.787 10:56:51 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:53.787 10:56:51 -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']' 00:01:53.787 10:56:51 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:53.787 10:56:51 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:53.787 10:56:51 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:53.787 10:56:51 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:53.787 10:56:51 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:53.787 10:56:51 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:53.787 10:56:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.787 10:56:51 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:53.787 10:56:51 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:53.787 10:56:51 -- pm/common@17 -- $ local monitor 00:01:53.787 10:56:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.787 10:56:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.787 10:56:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.787 10:56:51 -- pm/common@21 -- $ date +%s 00:01:53.787 10:56:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.787 10:56:51 -- pm/common@21 -- $ date +%s 00:01:53.787 10:56:51 -- pm/common@25 -- $ sleep 1 00:01:53.788 10:56:51 -- pm/common@21 -- $ date +%s 00:01:53.788 10:56:51 -- pm/common@21 -- $ date +%s 00:01:53.788 10:56:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728205011 00:01:53.788 10:56:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728205011 00:01:53.788 10:56:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728205011 00:01:53.788 10:56:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728205011 00:01:53.788 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728205011_collect-cpu-load.pm.log 00:01:53.788 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728205011_collect-vmstat.pm.log 00:01:53.788 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728205011_collect-cpu-temp.pm.log 00:01:53.788 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728205011_collect-bmc-pm.bmc.pm.log 00:01:54.728 10:56:52 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:54.728 10:56:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:54.728 10:56:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:54.728 10:56:52 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:54.728 10:56:52 -- spdk/autobuild.sh@16 -- $ date -u 00:01:54.728 Sun Oct 6 08:56:52 AM UTC 2024 00:01:54.728 10:56:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:54.728 v25.01-pre-35-g3950cd1bb 00:01:54.728 10:56:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:54.728 10:56:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:54.728 10:56:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:54.728 10:56:52 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:54.728 10:56:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:54.728 10:56:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.728 ************************************ 00:01:54.728 START TEST ubsan 00:01:54.728 ************************************ 00:01:54.728 10:56:52 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:54.728 using ubsan 00:01:54.728 00:01:54.728 real 0m0.000s 00:01:54.728 user 0m0.000s 00:01:54.728 sys 0m0.000s 00:01:54.728 10:56:52 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:54.728 10:56:52 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:54.728 ************************************ 00:01:54.728 END TEST ubsan 00:01:54.728 ************************************ 00:01:54.989 10:56:52 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:54.989 10:56:52 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:54.989 10:56:52 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:54.989 10:56:52 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:01:54.989 10:56:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:54.989 10:56:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.989 ************************************ 00:01:54.989 START TEST build_native_dpdk 00:01:54.989 ************************************ 00:01:54.989 10:56:52 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:54.989 10:56:52 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ CC=gcc 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:54.989 caf0f5d395 version: 22.11.4 00:01:54.989 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:54.989 dc9c799c7d vhost: fix missing spinlock unlock 00:01:54.989 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:54.989 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 
22.11.4 21.11.0 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:54.989 patching file config/rte_config.h 00:01:54.989 Hunk #1 succeeded at 60 (offset 1 line). 
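(Editor's note) The trace above walks through cmp_versions from scripts/common.sh, splitting both version strings on '.', '-' and ':' and comparing them field by field before deciding to apply the rte_config.h patch. A simplified sketch of that comparison, assuming purely numeric fields; it is not the actual scripts/common.sh implementation, just the same idea:

    # version_lt A B -> exit 0 if A is strictly older than B
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=${#ver1[@]}
        (( ${#ver2[@]} > len )) && len=${#ver2[@]}
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
            (( a > b )) && return 1                 # first differing field decides
            (( a < b )) && return 0
        done
        return 1                                    # equal versions are not "less than"
    }
    version_lt 22.11.4 21.11.0 || echo "22.11.4 is not older than 21.11.0"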
00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:54.989 patching file lib/pcapng/rte_pcapng.c 00:01:54.989 Hunk #1 succeeded at 110 (offset -18 lines). 
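(Editor's note) Both patches above are gated on such version checks: a compatibility fix is applied only when the checked-out DPDK predates the release that already contains it (here, the pcapng patch because 22.11.4 is older than 24.07.0). A sketch of that gating under those assumptions; the patch file name is a placeholder, not the file autobuild_common.sh actually uses:

    dpdk_ver=22.11.4
    version_older_than() {   # true if $1 < $2, letting GNU 'sort -V' do the ordering
        [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    if version_older_than "$dpdk_ver" 24.07.0; then
        patch -p1 < fix-pcapng.patch   # placeholder patch name
    fi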
00:01:54.989 10:56:52 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:54.989 10:56:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:54.990 10:56:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:54.990 10:56:52 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:54.990 10:56:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:54.990 10:56:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:54.990 10:56:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:54.990 10:56:52 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:54.990 10:56:52 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:54.990 10:56:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:54.990 10:56:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:54.990 10:56:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:54.990 10:56:52 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:54.990 10:56:52 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:54.990 10:56:52 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:54.990 10:56:52 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:54.990 10:56:52 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:54.990 10:56:52 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:54.990 10:56:52 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:54.990 10:56:52 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:54.990 10:56:52 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:59.188 The Meson build system 00:01:59.188 Version: 1.5.0 00:01:59.188 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:59.188 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:59.188 Build 
type: native build 00:01:59.188 Program cat found: YES (/usr/bin/cat) 00:01:59.188 Project name: DPDK 00:01:59.188 Project version: 22.11.4 00:01:59.188 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:59.188 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:59.188 Host machine cpu family: x86_64 00:01:59.188 Host machine cpu: x86_64 00:01:59.188 Message: ## Building in Developer Mode ## 00:01:59.188 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:59.188 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:59.188 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:59.188 Program objdump found: YES (/usr/bin/objdump) 00:01:59.188 Program python3 found: YES (/usr/bin/python3) 00:01:59.188 Program cat found: YES (/usr/bin/cat) 00:01:59.188 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:01:59.188 Checking for size of "void *" : 8 00:01:59.188 Checking for size of "void *" : 8 (cached) 00:01:59.188 Library m found: YES 00:01:59.188 Library numa found: YES 00:01:59.188 Has header "numaif.h" : YES 00:01:59.188 Library fdt found: NO 00:01:59.188 Library execinfo found: NO 00:01:59.188 Has header "execinfo.h" : YES 00:01:59.188 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:59.188 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:59.188 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:59.188 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:59.188 Run-time dependency openssl found: YES 3.1.1 00:01:59.188 Run-time dependency libpcap found: YES 1.10.4 00:01:59.188 Has header "pcap.h" with dependency libpcap: YES 00:01:59.188 Compiler for C supports arguments -Wcast-qual: YES 00:01:59.188 Compiler for C supports arguments -Wdeprecated: YES 00:01:59.188 Compiler for C supports arguments -Wformat: YES 00:01:59.188 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:59.188 Compiler for C supports arguments -Wformat-security: NO 00:01:59.188 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:59.188 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:59.188 Compiler for C supports arguments -Wnested-externs: YES 00:01:59.188 Compiler for C supports arguments -Wold-style-definition: YES 00:01:59.188 Compiler for C supports arguments -Wpointer-arith: YES 00:01:59.188 Compiler for C supports arguments -Wsign-compare: YES 00:01:59.188 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:59.188 Compiler for C supports arguments -Wundef: YES 00:01:59.188 Compiler for C supports arguments -Wwrite-strings: YES 00:01:59.188 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:59.188 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:59.188 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:59.188 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:59.188 Compiler for C supports arguments -mavx512f: YES 00:01:59.188 Checking if "AVX512 checking" compiles: YES 00:01:59.188 Fetching value of define "__SSE4_2__" : 1 00:01:59.188 Fetching value of define "__AES__" : 1 00:01:59.188 Fetching value of define "__AVX__" : 1 00:01:59.188 Fetching value of define "__AVX2__" : 1 00:01:59.188 Fetching value of define "__AVX512BW__" : 1 00:01:59.188 Fetching 
value of define "__AVX512CD__" : 1 00:01:59.188 Fetching value of define "__AVX512DQ__" : 1 00:01:59.188 Fetching value of define "__AVX512F__" : 1 00:01:59.188 Fetching value of define "__AVX512VL__" : 1 00:01:59.188 Fetching value of define "__PCLMUL__" : 1 00:01:59.188 Fetching value of define "__RDRND__" : 1 00:01:59.188 Fetching value of define "__RDSEED__" : 1 00:01:59.188 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:59.188 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:59.188 Message: lib/kvargs: Defining dependency "kvargs" 00:01:59.188 Message: lib/telemetry: Defining dependency "telemetry" 00:01:59.188 Checking for function "getentropy" : YES 00:01:59.188 Message: lib/eal: Defining dependency "eal" 00:01:59.188 Message: lib/ring: Defining dependency "ring" 00:01:59.189 Message: lib/rcu: Defining dependency "rcu" 00:01:59.189 Message: lib/mempool: Defining dependency "mempool" 00:01:59.189 Message: lib/mbuf: Defining dependency "mbuf" 00:01:59.189 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:59.189 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:59.189 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:59.189 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:59.189 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:59.189 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:59.189 Compiler for C supports arguments -mpclmul: YES 00:01:59.189 Compiler for C supports arguments -maes: YES 00:01:59.189 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:59.189 Compiler for C supports arguments -mavx512bw: YES 00:01:59.189 Compiler for C supports arguments -mavx512dq: YES 00:01:59.189 Compiler for C supports arguments -mavx512vl: YES 00:01:59.189 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:59.189 Compiler for C supports arguments -mavx2: YES 00:01:59.189 Compiler for C supports arguments -mavx: YES 00:01:59.189 Message: lib/net: Defining dependency "net" 00:01:59.189 Message: lib/meter: Defining dependency "meter" 00:01:59.189 Message: lib/ethdev: Defining dependency "ethdev" 00:01:59.189 Message: lib/pci: Defining dependency "pci" 00:01:59.189 Message: lib/cmdline: Defining dependency "cmdline" 00:01:59.189 Message: lib/metrics: Defining dependency "metrics" 00:01:59.189 Message: lib/hash: Defining dependency "hash" 00:01:59.189 Message: lib/timer: Defining dependency "timer" 00:01:59.189 Fetching value of define "__AVX2__" : 1 (cached) 00:01:59.189 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:59.189 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:59.189 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:59.189 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:59.189 Message: lib/acl: Defining dependency "acl" 00:01:59.189 Message: lib/bbdev: Defining dependency "bbdev" 00:01:59.189 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:59.189 Run-time dependency libelf found: YES 0.191 00:01:59.189 Message: lib/bpf: Defining dependency "bpf" 00:01:59.189 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:59.189 Message: lib/compressdev: Defining dependency "compressdev" 00:01:59.189 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:59.189 Message: lib/distributor: Defining dependency "distributor" 00:01:59.189 Message: lib/efd: Defining dependency "efd" 00:01:59.189 Message: lib/eventdev: Defining dependency "eventdev" 00:01:59.189 Message: lib/gpudev: Defining dependency "gpudev" 
00:01:59.189 Message: lib/gro: Defining dependency "gro" 00:01:59.189 Message: lib/gso: Defining dependency "gso" 00:01:59.189 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:59.189 Message: lib/jobstats: Defining dependency "jobstats" 00:01:59.189 Message: lib/latencystats: Defining dependency "latencystats" 00:01:59.189 Message: lib/lpm: Defining dependency "lpm" 00:01:59.189 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:59.189 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:59.189 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:59.189 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:59.189 Message: lib/member: Defining dependency "member" 00:01:59.189 Message: lib/pcapng: Defining dependency "pcapng" 00:01:59.189 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:59.189 Message: lib/power: Defining dependency "power" 00:01:59.189 Message: lib/rawdev: Defining dependency "rawdev" 00:01:59.189 Message: lib/regexdev: Defining dependency "regexdev" 00:01:59.189 Message: lib/dmadev: Defining dependency "dmadev" 00:01:59.189 Message: lib/rib: Defining dependency "rib" 00:01:59.189 Message: lib/reorder: Defining dependency "reorder" 00:01:59.189 Message: lib/sched: Defining dependency "sched" 00:01:59.189 Message: lib/security: Defining dependency "security" 00:01:59.189 Message: lib/stack: Defining dependency "stack" 00:01:59.189 Has header "linux/userfaultfd.h" : YES 00:01:59.189 Message: lib/vhost: Defining dependency "vhost" 00:01:59.189 Message: lib/ipsec: Defining dependency "ipsec" 00:01:59.189 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:59.189 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:59.189 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:59.189 Message: lib/fib: Defining dependency "fib" 00:01:59.189 Message: lib/port: Defining dependency "port" 00:01:59.189 Message: lib/pdump: Defining dependency "pdump" 00:01:59.189 Message: lib/table: Defining dependency "table" 00:01:59.189 Message: lib/pipeline: Defining dependency "pipeline" 00:01:59.189 Message: lib/graph: Defining dependency "graph" 00:01:59.189 Message: lib/node: Defining dependency "node" 00:01:59.189 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:59.189 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:59.189 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:59.189 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:59.189 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:59.189 Compiler for C supports arguments -Wno-unused-value: YES 00:01:59.189 Compiler for C supports arguments -Wno-format: YES 00:01:59.189 Compiler for C supports arguments -Wno-format-security: YES 00:01:59.189 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:00.596 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:00.596 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:00.596 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:00.596 Fetching value of define "__AVX2__" : 1 (cached) 00:02:00.596 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:00.596 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:00.596 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:00.596 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:00.596 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:00.596 Message: drivers/net/i40e: 
Defining dependency "net_i40e" 00:02:00.596 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:00.596 Configuring doxy-api.conf using configuration 00:02:00.596 Program sphinx-build found: NO 00:02:00.596 Configuring rte_build_config.h using configuration 00:02:00.596 Message: 00:02:00.596 ================= 00:02:00.596 Applications Enabled 00:02:00.596 ================= 00:02:00.596 00:02:00.596 apps: 00:02:00.596 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:00.596 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:00.596 test-security-perf, 00:02:00.596 00:02:00.596 Message: 00:02:00.596 ================= 00:02:00.596 Libraries Enabled 00:02:00.596 ================= 00:02:00.596 00:02:00.596 libs: 00:02:00.596 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:00.596 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:00.596 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:00.596 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:00.596 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:00.596 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:00.596 table, pipeline, graph, node, 00:02:00.596 00:02:00.596 Message: 00:02:00.596 =============== 00:02:00.596 Drivers Enabled 00:02:00.596 =============== 00:02:00.596 00:02:00.596 common: 00:02:00.596 00:02:00.596 bus: 00:02:00.596 pci, vdev, 00:02:00.596 mempool: 00:02:00.596 ring, 00:02:00.596 dma: 00:02:00.596 00:02:00.596 net: 00:02:00.596 i40e, 00:02:00.596 raw: 00:02:00.596 00:02:00.596 crypto: 00:02:00.596 00:02:00.596 compress: 00:02:00.596 00:02:00.596 regex: 00:02:00.596 00:02:00.596 vdpa: 00:02:00.596 00:02:00.596 event: 00:02:00.596 00:02:00.596 baseband: 00:02:00.596 00:02:00.596 gpu: 00:02:00.596 00:02:00.596 00:02:00.596 Message: 00:02:00.596 ================= 00:02:00.596 Content Skipped 00:02:00.596 ================= 00:02:00.596 00:02:00.596 apps: 00:02:00.596 00:02:00.596 libs: 00:02:00.596 kni: explicitly disabled via build config (deprecated lib) 00:02:00.596 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:00.596 00:02:00.596 drivers: 00:02:00.596 common/cpt: not in enabled drivers build config 00:02:00.596 common/dpaax: not in enabled drivers build config 00:02:00.596 common/iavf: not in enabled drivers build config 00:02:00.596 common/idpf: not in enabled drivers build config 00:02:00.596 common/mvep: not in enabled drivers build config 00:02:00.596 common/octeontx: not in enabled drivers build config 00:02:00.596 bus/auxiliary: not in enabled drivers build config 00:02:00.596 bus/dpaa: not in enabled drivers build config 00:02:00.596 bus/fslmc: not in enabled drivers build config 00:02:00.596 bus/ifpga: not in enabled drivers build config 00:02:00.596 bus/vmbus: not in enabled drivers build config 00:02:00.596 common/cnxk: not in enabled drivers build config 00:02:00.596 common/mlx5: not in enabled drivers build config 00:02:00.596 common/qat: not in enabled drivers build config 00:02:00.596 common/sfc_efx: not in enabled drivers build config 00:02:00.596 mempool/bucket: not in enabled drivers build config 00:02:00.596 mempool/cnxk: not in enabled drivers build config 00:02:00.596 mempool/dpaa: not in enabled drivers build config 00:02:00.596 mempool/dpaa2: not in enabled drivers build config 00:02:00.596 mempool/octeontx: not in enabled drivers build config 
00:02:00.596 mempool/stack: not in enabled drivers build config 00:02:00.596 dma/cnxk: not in enabled drivers build config 00:02:00.596 dma/dpaa: not in enabled drivers build config 00:02:00.596 dma/dpaa2: not in enabled drivers build config 00:02:00.596 dma/hisilicon: not in enabled drivers build config 00:02:00.596 dma/idxd: not in enabled drivers build config 00:02:00.596 dma/ioat: not in enabled drivers build config 00:02:00.596 dma/skeleton: not in enabled drivers build config 00:02:00.596 net/af_packet: not in enabled drivers build config 00:02:00.596 net/af_xdp: not in enabled drivers build config 00:02:00.596 net/ark: not in enabled drivers build config 00:02:00.596 net/atlantic: not in enabled drivers build config 00:02:00.596 net/avp: not in enabled drivers build config 00:02:00.596 net/axgbe: not in enabled drivers build config 00:02:00.596 net/bnx2x: not in enabled drivers build config 00:02:00.596 net/bnxt: not in enabled drivers build config 00:02:00.596 net/bonding: not in enabled drivers build config 00:02:00.596 net/cnxk: not in enabled drivers build config 00:02:00.596 net/cxgbe: not in enabled drivers build config 00:02:00.596 net/dpaa: not in enabled drivers build config 00:02:00.596 net/dpaa2: not in enabled drivers build config 00:02:00.596 net/e1000: not in enabled drivers build config 00:02:00.596 net/ena: not in enabled drivers build config 00:02:00.596 net/enetc: not in enabled drivers build config 00:02:00.596 net/enetfec: not in enabled drivers build config 00:02:00.596 net/enic: not in enabled drivers build config 00:02:00.596 net/failsafe: not in enabled drivers build config 00:02:00.596 net/fm10k: not in enabled drivers build config 00:02:00.596 net/gve: not in enabled drivers build config 00:02:00.597 net/hinic: not in enabled drivers build config 00:02:00.597 net/hns3: not in enabled drivers build config 00:02:00.597 net/iavf: not in enabled drivers build config 00:02:00.597 net/ice: not in enabled drivers build config 00:02:00.597 net/idpf: not in enabled drivers build config 00:02:00.597 net/igc: not in enabled drivers build config 00:02:00.597 net/ionic: not in enabled drivers build config 00:02:00.597 net/ipn3ke: not in enabled drivers build config 00:02:00.597 net/ixgbe: not in enabled drivers build config 00:02:00.597 net/kni: not in enabled drivers build config 00:02:00.597 net/liquidio: not in enabled drivers build config 00:02:00.597 net/mana: not in enabled drivers build config 00:02:00.597 net/memif: not in enabled drivers build config 00:02:00.597 net/mlx4: not in enabled drivers build config 00:02:00.597 net/mlx5: not in enabled drivers build config 00:02:00.597 net/mvneta: not in enabled drivers build config 00:02:00.597 net/mvpp2: not in enabled drivers build config 00:02:00.597 net/netvsc: not in enabled drivers build config 00:02:00.597 net/nfb: not in enabled drivers build config 00:02:00.597 net/nfp: not in enabled drivers build config 00:02:00.597 net/ngbe: not in enabled drivers build config 00:02:00.597 net/null: not in enabled drivers build config 00:02:00.597 net/octeontx: not in enabled drivers build config 00:02:00.597 net/octeon_ep: not in enabled drivers build config 00:02:00.597 net/pcap: not in enabled drivers build config 00:02:00.597 net/pfe: not in enabled drivers build config 00:02:00.597 net/qede: not in enabled drivers build config 00:02:00.597 net/ring: not in enabled drivers build config 00:02:00.597 net/sfc: not in enabled drivers build config 00:02:00.597 net/softnic: not in enabled drivers build config 00:02:00.597 
net/tap: not in enabled drivers build config 00:02:00.597 net/thunderx: not in enabled drivers build config 00:02:00.597 net/txgbe: not in enabled drivers build config 00:02:00.597 net/vdev_netvsc: not in enabled drivers build config 00:02:00.597 net/vhost: not in enabled drivers build config 00:02:00.597 net/virtio: not in enabled drivers build config 00:02:00.597 net/vmxnet3: not in enabled drivers build config 00:02:00.597 raw/cnxk_bphy: not in enabled drivers build config 00:02:00.597 raw/cnxk_gpio: not in enabled drivers build config 00:02:00.597 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:00.597 raw/ifpga: not in enabled drivers build config 00:02:00.597 raw/ntb: not in enabled drivers build config 00:02:00.597 raw/skeleton: not in enabled drivers build config 00:02:00.597 crypto/armv8: not in enabled drivers build config 00:02:00.597 crypto/bcmfs: not in enabled drivers build config 00:02:00.597 crypto/caam_jr: not in enabled drivers build config 00:02:00.597 crypto/ccp: not in enabled drivers build config 00:02:00.597 crypto/cnxk: not in enabled drivers build config 00:02:00.597 crypto/dpaa_sec: not in enabled drivers build config 00:02:00.597 crypto/dpaa2_sec: not in enabled drivers build config 00:02:00.597 crypto/ipsec_mb: not in enabled drivers build config 00:02:00.597 crypto/mlx5: not in enabled drivers build config 00:02:00.597 crypto/mvsam: not in enabled drivers build config 00:02:00.597 crypto/nitrox: not in enabled drivers build config 00:02:00.597 crypto/null: not in enabled drivers build config 00:02:00.597 crypto/octeontx: not in enabled drivers build config 00:02:00.597 crypto/openssl: not in enabled drivers build config 00:02:00.597 crypto/scheduler: not in enabled drivers build config 00:02:00.597 crypto/uadk: not in enabled drivers build config 00:02:00.597 crypto/virtio: not in enabled drivers build config 00:02:00.597 compress/isal: not in enabled drivers build config 00:02:00.597 compress/mlx5: not in enabled drivers build config 00:02:00.597 compress/octeontx: not in enabled drivers build config 00:02:00.597 compress/zlib: not in enabled drivers build config 00:02:00.597 regex/mlx5: not in enabled drivers build config 00:02:00.597 regex/cn9k: not in enabled drivers build config 00:02:00.597 vdpa/ifc: not in enabled drivers build config 00:02:00.597 vdpa/mlx5: not in enabled drivers build config 00:02:00.597 vdpa/sfc: not in enabled drivers build config 00:02:00.597 event/cnxk: not in enabled drivers build config 00:02:00.597 event/dlb2: not in enabled drivers build config 00:02:00.597 event/dpaa: not in enabled drivers build config 00:02:00.597 event/dpaa2: not in enabled drivers build config 00:02:00.597 event/dsw: not in enabled drivers build config 00:02:00.597 event/opdl: not in enabled drivers build config 00:02:00.597 event/skeleton: not in enabled drivers build config 00:02:00.597 event/sw: not in enabled drivers build config 00:02:00.597 event/octeontx: not in enabled drivers build config 00:02:00.597 baseband/acc: not in enabled drivers build config 00:02:00.597 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:00.597 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:00.597 baseband/la12xx: not in enabled drivers build config 00:02:00.597 baseband/null: not in enabled drivers build config 00:02:00.597 baseband/turbo_sw: not in enabled drivers build config 00:02:00.597 gpu/cuda: not in enabled drivers build config 00:02:00.597 00:02:00.597 00:02:00.597 Build targets in project: 311 00:02:00.597 00:02:00.597 
DPDK 22.11.4 00:02:00.597 00:02:00.597 User defined options 00:02:00.597 libdir : lib 00:02:00.597 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:00.597 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:00.597 c_link_args : 00:02:00.597 enable_docs : false 00:02:00.597 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:00.597 enable_kmods : false 00:02:00.597 machine : native 00:02:00.597 tests : false 00:02:00.597 00:02:00.597 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:00.597 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:00.597 10:56:57 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 00:02:00.597 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:00.597 [1/740] Generating lib/rte_telemetry_def with a custom command 00:02:00.597 [2/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:00.597 [3/740] Generating lib/rte_kvargs_def with a custom command 00:02:00.597 [4/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:00.597 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:00.597 [6/740] Generating lib/rte_eal_def with a custom command 00:02:00.597 [7/740] Generating lib/rte_eal_mingw with a custom command 00:02:00.597 [8/740] Generating lib/rte_rcu_def with a custom command 00:02:00.597 [9/740] Generating lib/rte_rcu_mingw with a custom command 00:02:00.597 [10/740] Generating lib/rte_mbuf_def with a custom command 00:02:00.597 [11/740] Generating lib/rte_ring_def with a custom command 00:02:00.597 [12/740] Generating lib/rte_ring_mingw with a custom command 00:02:00.597 [13/740] Generating lib/rte_mempool_mingw with a custom command 00:02:00.597 [14/740] Generating lib/rte_mempool_def with a custom command 00:02:00.597 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:00.597 [16/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:00.597 [17/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:00.597 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:00.597 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:00.597 [20/740] Generating lib/rte_meter_def with a custom command 00:02:00.597 [21/740] Generating lib/rte_net_def with a custom command 00:02:00.597 [22/740] Generating lib/rte_net_mingw with a custom command 00:02:00.597 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:00.597 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:00.598 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:00.598 [26/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:00.598 [27/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:00.598 [28/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:00.598 [29/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:00.598 [30/740] Generating lib/rte_meter_mingw with a custom command 00:02:00.598 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:00.860 [32/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:00.860 [33/740] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.860 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.860 [35/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:00.860 [36/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:00.860 [37/740] Generating lib/rte_ethdev_def with a custom command 00:02:00.860 [38/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:00.860 [39/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:00.860 [40/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:00.860 [41/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:00.860 [42/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:00.861 [43/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:00.861 [44/740] Generating lib/rte_pci_def with a custom command 00:02:00.861 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:00.861 [46/740] Generating lib/rte_pci_mingw with a custom command 00:02:00.861 [47/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:00.861 [48/740] Linking static target lib/librte_kvargs.a 00:02:00.861 [49/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:00.861 [50/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:00.861 [51/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:00.861 [52/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:00.861 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:00.861 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:00.861 [55/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:00.861 [56/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.861 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:00.861 [58/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:00.861 [59/740] Generating lib/rte_metrics_mingw with a custom command 00:02:00.861 [60/740] Generating lib/rte_cmdline_def with a custom command 00:02:00.861 [61/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:00.861 [62/740] Generating lib/rte_metrics_def with a custom command 00:02:00.861 [63/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:00.861 [64/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:00.861 [65/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:00.861 [66/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:00.861 [67/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:00.861 [68/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:00.861 [69/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:00.861 [70/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:00.861 [71/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:00.861 [72/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:00.861 [73/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:00.861 [74/740] Linking static target lib/librte_pci.a 00:02:00.861 
[75/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:00.861 [76/740] Generating lib/rte_hash_def with a custom command 00:02:00.861 [77/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:00.861 [78/740] Linking static target lib/librte_meter.a 00:02:00.861 [79/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:00.861 [80/740] Generating lib/rte_hash_mingw with a custom command 00:02:00.861 [81/740] Linking static target lib/librte_ring.a 00:02:00.861 [82/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:00.861 [83/740] Generating lib/rte_timer_mingw with a custom command 00:02:00.861 [84/740] Generating lib/rte_timer_def with a custom command 00:02:00.861 [85/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:00.861 [86/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.861 [87/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:00.861 [88/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:00.861 [89/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:00.861 [90/740] Generating lib/rte_acl_mingw with a custom command 00:02:00.861 [91/740] Generating lib/rte_acl_def with a custom command 00:02:00.861 [92/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:00.861 [93/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:00.861 [94/740] Generating lib/rte_bitratestats_def with a custom command 00:02:00.861 [95/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:00.861 [96/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:00.861 [97/740] Generating lib/rte_bbdev_def with a custom command 00:02:00.861 [98/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:00.861 [99/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:00.861 [100/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:00.861 [101/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:00.861 [102/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:00.861 [103/740] Generating lib/rte_bpf_mingw with a custom command 00:02:00.861 [104/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:00.861 [105/740] Generating lib/rte_bpf_def with a custom command 00:02:00.861 [106/740] Generating lib/rte_cfgfile_def with a custom command 00:02:01.128 [107/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:01.128 [108/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:01.128 [109/740] Generating lib/rte_compressdev_def with a custom command 00:02:01.128 [110/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:01.128 [111/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:01.128 [112/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:01.128 [113/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:01.128 [114/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:01.128 [115/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:01.128 [116/740] Generating lib/rte_cryptodev_def with a custom command 00:02:01.128 [117/740] Generating 
lib/rte_cryptodev_mingw with a custom command 00:02:01.128 [118/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:01.128 [119/740] Generating lib/rte_distributor_mingw with a custom command 00:02:01.128 [120/740] Generating lib/rte_distributor_def with a custom command 00:02:01.128 [121/740] Generating lib/rte_efd_def with a custom command 00:02:01.128 [122/740] Generating lib/rte_efd_mingw with a custom command 00:02:01.128 [123/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:01.128 [124/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:01.128 [125/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:01.128 [126/740] Generating lib/rte_eventdev_def with a custom command 00:02:01.128 [127/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:01.128 [128/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:01.128 [129/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.128 [130/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.128 [131/740] Generating lib/rte_gpudev_def with a custom command 00:02:01.128 [132/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:01.128 [133/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.128 [134/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:01.388 [135/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:01.388 [136/740] Linking target lib/librte_kvargs.so.23.0 00:02:01.388 [137/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:01.388 [138/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:01.388 [139/740] Generating lib/rte_gro_def with a custom command 00:02:01.389 [140/740] Generating lib/rte_gro_mingw with a custom command 00:02:01.389 [141/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:01.389 [142/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:01.389 [143/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.389 [144/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:01.389 [145/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:01.389 [146/740] Generating lib/rte_gso_def with a custom command 00:02:01.389 [147/740] Linking static target lib/librte_cfgfile.a 00:02:01.389 [148/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:01.389 [149/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:01.389 [150/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:01.389 [151/740] Generating lib/rte_gso_mingw with a custom command 00:02:01.389 [152/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:01.389 [153/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:01.389 [154/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:01.389 [155/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:01.389 [156/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:01.389 [157/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:01.389 [158/740] Compiling C object 
lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:01.389 [159/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:01.389 [160/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:01.389 [161/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:01.389 [162/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:01.389 [163/740] Linking static target lib/librte_metrics.a 00:02:01.651 [164/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:01.651 [165/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:01.651 [166/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:01.651 [167/740] Generating lib/rte_jobstats_def with a custom command 00:02:01.651 [168/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:01.651 [169/740] Generating lib/rte_ip_frag_def with a custom command 00:02:01.651 [170/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:01.651 [171/740] Linking static target lib/librte_timer.a 00:02:01.651 [172/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:01.651 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:01.651 [174/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:01.651 [175/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:01.651 [176/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:01.651 [177/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:01.651 [178/740] Generating lib/rte_latencystats_def with a custom command 00:02:01.651 [179/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:01.651 [180/740] Linking static target lib/librte_cmdline.a 00:02:01.651 [181/740] Generating lib/rte_lpm_mingw with a custom command 00:02:01.651 [182/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:01.651 [183/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:01.651 [184/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:01.651 [185/740] Generating lib/rte_lpm_def with a custom command 00:02:01.651 [186/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:01.651 [187/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:01.651 [188/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:01.651 [189/740] Generating lib/rte_member_mingw with a custom command 00:02:01.651 [190/740] Generating lib/rte_member_def with a custom command 00:02:01.651 [191/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:01.651 [192/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:01.651 [193/740] Generating lib/rte_pcapng_def with a custom command 00:02:01.651 [194/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:01.651 [195/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:01.651 [196/740] Linking static target lib/librte_bitratestats.a 00:02:01.651 [197/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:01.651 [198/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:01.651 [199/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:01.651 [200/740] Linking static target lib/librte_telemetry.a 00:02:01.651 [201/740] Compiling C 
object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:01.651 [202/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:01.651 [203/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:01.651 [204/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:01.651 [205/740] Generating lib/rte_power_mingw with a custom command 00:02:01.651 [206/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:01.651 [207/740] Linking static target lib/librte_jobstats.a 00:02:01.651 [208/740] Generating lib/rte_power_def with a custom command 00:02:01.651 [209/740] Generating lib/rte_rawdev_def with a custom command 00:02:01.651 [210/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:01.651 [211/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:01.651 [212/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:01.651 [213/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:01.651 [214/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:01.651 [215/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:01.651 [216/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:01.651 [217/740] Generating lib/rte_regexdev_def with a custom command 00:02:01.651 [218/740] Linking static target lib/librte_net.a 00:02:01.651 [219/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:01.651 [220/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:01.651 [221/740] Generating lib/rte_dmadev_def with a custom command 00:02:01.651 [222/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:01.651 [223/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:01.651 [224/740] Generating lib/rte_rib_mingw with a custom command 00:02:01.651 [225/740] Generating lib/rte_reorder_def with a custom command 00:02:01.651 [226/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:01.922 [227/740] Generating lib/rte_rib_def with a custom command 00:02:01.922 [228/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:01.922 [229/740] Generating lib/rte_reorder_mingw with a custom command 00:02:01.922 [230/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:01.922 [231/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:01.922 [232/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:01.922 [233/740] Generating lib/rte_sched_def with a custom command 00:02:01.922 [234/740] Generating lib/rte_sched_mingw with a custom command 00:02:01.922 [235/740] Generating lib/rte_security_def with a custom command 00:02:01.922 [236/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:01.923 [237/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:01.923 [238/740] Generating lib/rte_security_mingw with a custom command 00:02:01.923 [239/740] Generating lib/rte_stack_def with a custom command 00:02:01.923 [240/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:01.923 [241/740] Generating lib/rte_stack_mingw with a custom command 00:02:01.923 [242/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:01.923 [243/740] Linking static target lib/librte_compressdev.a 00:02:01.923 [244/740] Generating lib/rte_vhost_def with a custom command 00:02:01.923 
[245/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:01.923 [246/740] Generating lib/rte_vhost_mingw with a custom command 00:02:01.923 [247/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:01.923 [248/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:01.923 [249/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:01.923 [250/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:01.923 [251/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:01.923 [252/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.923 [253/740] Linking static target lib/librte_rcu.a 00:02:01.923 [254/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:01.923 [255/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:01.923 [256/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:01.923 [257/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:01.923 [258/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:01.923 [259/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:01.923 [260/740] Linking static target lib/librte_stack.a 00:02:01.923 [261/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:01.923 [262/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.923 [263/740] Generating lib/rte_ipsec_def with a custom command 00:02:01.923 [264/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:01.923 [265/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:01.923 [266/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:01.923 [267/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:02.191 [268/740] Generating lib/rte_fib_def with a custom command 00:02:02.191 [269/740] Generating lib/rte_fib_mingw with a custom command 00:02:02.191 [270/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:02.191 [271/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:02.191 [272/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:02.191 [273/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:02.191 [274/740] Linking static target lib/librte_bbdev.a 00:02:02.191 [275/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.191 [276/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:02.191 [277/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:02.191 [278/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:02.191 [279/740] Linking static target lib/librte_rawdev.a 00:02:02.191 [280/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.191 [281/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:02.191 [282/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:02.191 [283/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.191 [284/740] Linking static target lib/librte_mempool.a 00:02:02.191 [285/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:02.191 [286/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:02.191 [287/740] 
Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.191 [288/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.191 [289/740] Generating lib/rte_port_def with a custom command 00:02:02.191 [290/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:02.191 [291/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:02.191 [292/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:02.191 [293/740] Generating lib/rte_port_mingw with a custom command 00:02:02.191 [294/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:02.191 [295/740] Linking static target lib/librte_dmadev.a 00:02:02.191 [296/740] Generating lib/rte_pdump_mingw with a custom command 00:02:02.191 [297/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:02.191 [298/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:02.454 [299/740] Generating lib/rte_pdump_def with a custom command 00:02:02.454 [300/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:02.454 [301/740] Linking target lib/librte_telemetry.so.23.0 00:02:02.454 [302/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:02.454 [303/740] Linking static target lib/librte_latencystats.a 00:02:02.454 [304/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:02.454 [305/740] Linking static target lib/librte_gpudev.a 00:02:02.454 [306/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:02.454 [307/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:02.454 [308/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:02.454 [309/740] Linking static target lib/librte_gso.a 00:02:02.454 [310/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.454 [311/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:02.454 [312/740] Linking static target lib/librte_gro.a 00:02:02.454 [313/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:02.454 [314/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:02.454 [315/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:02.454 [316/740] Linking static target lib/librte_distributor.a 00:02:02.454 [317/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:02.454 [318/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:02.454 [319/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:02.454 [320/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.454 [321/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:02.454 [322/740] Generating lib/rte_table_def with a custom command 00:02:02.454 [323/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:02.454 [324/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:02.454 [325/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:02.715 [326/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:02.715 [327/740] Generating lib/rte_table_mingw with a custom command 00:02:02.715 [328/740] Linking static target 
lib/librte_regexdev.a 00:02:02.715 [329/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:02.715 [330/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:02.715 [331/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:02.715 [332/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:02.715 [333/740] Linking static target lib/librte_mbuf.a 00:02:02.715 [334/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:02.715 [335/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:02.715 [336/740] Generating lib/rte_pipeline_def with a custom command 00:02:02.715 [337/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:02.715 [338/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:02.715 [339/740] Linking static target lib/librte_reorder.a 00:02:02.715 [340/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.715 [341/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:02.715 [342/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:02.715 [343/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:02.715 [344/740] Linking static target lib/librte_security.a 00:02:02.715 [345/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.715 [346/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:02.977 [347/740] Generating lib/rte_graph_def with a custom command 00:02:02.977 [348/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:02.977 [349/740] Generating lib/rte_graph_mingw with a custom command 00:02:02.977 [350/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.977 [351/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:02.977 [352/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:02.977 [353/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:02.977 [354/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:02.977 [355/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:02.977 [356/740] Linking static target lib/librte_ip_frag.a 00:02:02.977 [357/740] Linking static target lib/librte_eal.a 00:02:02.977 [358/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:02.977 [359/740] Linking static target lib/librte_power.a 00:02:02.977 [360/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:02.977 [361/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:02.977 [362/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.977 [363/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.977 [364/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:02.977 [365/740] Generating lib/rte_node_def with a custom command 00:02:02.977 [366/740] Generating lib/rte_node_mingw with a custom command 00:02:02.977 [367/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:02.977 [368/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:02.977 [369/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:02.977 [370/740] Compiling C 
object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:02.977 [371/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:02.977 [372/740] Linking static target lib/librte_pcapng.a 00:02:02.977 [373/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.977 [374/740] Generating drivers/rte_bus_pci_def with a custom command 00:02:03.242 [375/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:03.242 [376/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:03.242 [377/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:03.242 [378/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:03.242 [379/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:03.242 [380/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:03.242 [381/740] Linking static target lib/librte_bpf.a 00:02:03.242 [382/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:03.242 [383/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:03.242 [384/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:03.242 [385/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:03.242 [386/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:03.242 [387/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.242 [388/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:03.242 [389/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:03.242 [390/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:03.242 [391/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:03.242 [392/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:03.242 [393/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.242 [394/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:03.242 [395/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:03.242 [396/740] Linking static target lib/librte_lpm.a 00:02:03.242 [397/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:03.242 [398/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:03.242 [399/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:03.242 [400/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:03.242 [401/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.242 [402/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:03.512 [403/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:03.512 [404/740] Linking static target lib/librte_rib.a 00:02:03.512 [405/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:03.513 [406/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:03.513 [407/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:03.513 [408/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:03.513 [409/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:03.513 [410/740] Generating 
drivers/rte_net_i40e_def with a custom command 00:02:03.513 [411/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:03.513 [412/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:03.513 [413/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:03.513 [414/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:03.513 [415/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.513 [416/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:03.513 [417/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.513 [418/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:03.513 [419/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:03.513 [420/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.513 [421/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:03.513 [422/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:03.513 [423/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:03.513 [424/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:03.513 [425/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.513 [426/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:03.513 [427/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:03.513 [428/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:03.513 [429/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.513 [430/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:03.773 [431/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:03.773 [432/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:03.773 [433/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:03.773 [434/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:03.773 [435/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.773 [436/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:03.773 [437/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:03.773 [438/740] Linking static target lib/librte_efd.a 00:02:03.773 [439/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:03.773 [440/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:03.773 [441/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:03.773 [442/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:03.773 [443/740] Linking static target lib/librte_graph.a 00:02:03.773 [444/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.773 [445/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:03.773 [446/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:03.773 [447/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:03.773 [448/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.036 [449/740] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:04.036 [450/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:04.036 [451/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:04.036 [452/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:04.036 [453/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:04.036 [454/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.036 [455/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.036 [456/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:04.036 [457/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:04.036 [458/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:04.036 [459/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:04.036 [460/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:04.036 [461/740] Linking static target lib/librte_fib.a 00:02:04.036 [462/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:04.036 [463/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.036 [464/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.036 [465/740] Linking static target drivers/librte_bus_vdev.a 00:02:04.036 [466/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:04.036 [467/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.036 [468/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:04.298 [469/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:04.298 [470/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.298 [471/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:04.298 [472/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:04.298 [473/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:04.298 [474/740] Linking static target lib/librte_pdump.a 00:02:04.298 [475/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:04.298 [476/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:04.298 [477/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:04.298 [478/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.565 [479/740] Linking static target drivers/librte_bus_pci.a 00:02:04.565 [480/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:04.565 [481/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.565 [482/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:04.565 [483/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:04.565 [484/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:04.565 [485/740] Linking static target lib/librte_table.a 00:02:04.565 [486/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:04.565 [487/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:04.565 [488/740] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:04.565 [489/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:04.565 [490/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:04.565 [491/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:04.565 [492/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:04.565 [493/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.565 [494/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:04.565 [495/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.565 [496/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:04.825 [497/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.825 [498/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:04.825 [499/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:04.825 [500/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:04.825 [501/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:04.825 [502/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:04.825 [503/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:04.825 [504/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:04.825 [505/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.825 [506/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:04.825 [507/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:04.825 [508/740] Linking static target lib/librte_cryptodev.a 00:02:04.825 [509/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:05.085 [510/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:05.085 [511/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:05.085 [512/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:05.085 [513/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:05.085 [514/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:05.085 [515/740] Linking static target lib/librte_ipsec.a 00:02:05.085 [516/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:05.085 [517/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:05.085 [518/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:05.085 [519/740] Linking static target lib/librte_ethdev.a 00:02:05.085 [520/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:05.085 [521/740] Linking static target lib/librte_sched.a 00:02:05.085 [522/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:05.085 [523/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:05.085 [524/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:05.085 [525/740] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:05.085 [526/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:05.085 [527/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:05.085 [528/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:05.085 [529/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:05.085 [530/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:05.085 [531/740] Linking static target lib/librte_node.a 00:02:05.085 [532/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:05.085 [533/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:05.085 [534/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:05.085 [535/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:05.085 [536/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:05.085 [537/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:05.085 [538/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.085 [539/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:05.085 [540/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:05.347 [541/740] Linking static target lib/librte_member.a 00:02:05.347 [542/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:05.347 [543/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:05.347 [544/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:05.347 [545/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:05.347 [546/740] Linking static target lib/librte_port.a 00:02:05.347 [547/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:05.347 [548/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:05.347 [549/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.347 [550/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:05.347 [551/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:05.347 [552/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:05.347 [553/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:05.347 [554/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:05.347 [555/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:05.347 [556/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:05.347 [557/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.605 [558/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.605 [559/740] Linking static target drivers/librte_mempool_ring.a 00:02:05.605 [560/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.605 [561/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:05.605 [562/740] Linking static target 
lib/librte_hash.a 00:02:05.605 [563/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:05.605 [564/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.605 [565/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:05.605 [566/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:05.605 [567/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.605 [568/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.605 [569/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:05.605 [570/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:05.605 [571/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:05.605 [572/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:05.605 [573/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.605 [574/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:05.605 [575/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:05.605 [576/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:05.605 [577/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:05.605 [578/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:05.605 [579/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:05.605 [580/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:05.605 [581/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:05.605 [582/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:05.605 [583/740] Linking static target lib/librte_eventdev.a 00:02:05.865 [584/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:05.865 [585/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:05.865 [586/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:05.865 [587/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:05.865 [588/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:05.865 [589/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:05.865 [590/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:05.865 [591/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:05.865 [592/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:05.865 [593/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:05.865 [594/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:06.124 [595/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:06.124 [596/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:06.124 [597/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:06.124 [598/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.124 [599/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:06.124 [600/740] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:06.124 [601/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:06.124 [602/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:06.124 [603/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:06.124 [604/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:06.124 [605/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:06.382 [606/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:06.382 [607/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:06.383 [608/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.383 [609/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:06.383 [610/740] Linking static target lib/librte_acl.a 00:02:06.383 [611/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:06.640 [612/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:06.640 [613/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:06.640 [614/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:06.897 [615/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.897 [616/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:07.154 [617/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:07.412 [618/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:07.670 [619/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:07.670 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:07.928 [621/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.928 [622/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:08.495 [623/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.495 [624/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:08.495 [625/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:08.754 [626/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:08.754 [627/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:09.014 [628/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:09.014 [629/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:09.014 [630/740] Linking static target drivers/librte_net_i40e.a 00:02:09.581 [631/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:09.841 [632/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:09.841 [633/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.378 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.285 [635/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.285 [636/740] Linking target lib/librte_eal.so.23.0 00:02:14.544 [637/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:14.544 [638/740] Linking target 
lib/librte_pci.so.23.0 00:02:14.544 [639/740] Linking target lib/librte_rawdev.so.23.0 00:02:14.544 [640/740] Linking target lib/librte_timer.so.23.0 00:02:14.544 [641/740] Linking target lib/librte_ring.so.23.0 00:02:14.544 [642/740] Linking target lib/librte_meter.so.23.0 00:02:14.544 [643/740] Linking target lib/librte_jobstats.so.23.0 00:02:14.544 [644/740] Linking target lib/librte_graph.so.23.0 00:02:14.544 [645/740] Linking target lib/librte_cfgfile.so.23.0 00:02:14.544 [646/740] Linking target lib/librte_stack.so.23.0 00:02:14.544 [647/740] Linking target lib/librte_dmadev.so.23.0 00:02:14.544 [648/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:14.544 [649/740] Linking target lib/librte_acl.so.23.0 00:02:14.803 [650/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:14.803 [651/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:14.803 [652/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:14.803 [653/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:14.803 [654/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:14.803 [655/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:14.803 [656/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:14.803 [657/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:14.803 [658/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:14.803 [659/740] Linking target lib/librte_rcu.so.23.0 00:02:14.803 [660/740] Linking target lib/librte_mempool.so.23.0 00:02:14.803 [661/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:14.803 [662/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:14.803 [663/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:14.803 [664/740] Linking target lib/librte_rib.so.23.0 00:02:14.803 [665/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:14.803 [666/740] Linking target lib/librte_mbuf.so.23.0 00:02:15.062 [667/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:15.063 [668/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:15.063 [669/740] Linking target lib/librte_bbdev.so.23.0 00:02:15.063 [670/740] Linking target lib/librte_gpudev.so.23.0 00:02:15.063 [671/740] Linking target lib/librte_fib.so.23.0 00:02:15.063 [672/740] Linking target lib/librte_net.so.23.0 00:02:15.063 [673/740] Linking target lib/librte_cryptodev.so.23.0 00:02:15.063 [674/740] Linking target lib/librte_compressdev.so.23.0 00:02:15.063 [675/740] Linking target lib/librte_reorder.so.23.0 00:02:15.063 [676/740] Linking target lib/librte_distributor.so.23.0 00:02:15.063 [677/740] Linking target lib/librte_regexdev.so.23.0 00:02:15.063 [678/740] Linking target lib/librte_sched.so.23.0 00:02:15.322 [679/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:15.322 [680/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:15.322 [681/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:15.322 [682/740] Linking target lib/librte_cmdline.so.23.0 00:02:15.322 [683/740] Linking target lib/librte_security.so.23.0 00:02:15.322 
[684/740] Linking target lib/librte_hash.so.23.0 00:02:15.322 [685/740] Linking target lib/librte_ethdev.so.23.0 00:02:15.322 [686/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:15.322 [687/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:15.322 [688/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:15.322 [689/740] Linking target lib/librte_efd.so.23.0 00:02:15.322 [690/740] Linking target lib/librte_lpm.so.23.0 00:02:15.322 [691/740] Linking target lib/librte_member.so.23.0 00:02:15.322 [692/740] Linking target lib/librte_ipsec.so.23.0 00:02:15.582 [693/740] Linking target lib/librte_gso.so.23.0 00:02:15.582 [694/740] Linking target lib/librte_eventdev.so.23.0 00:02:15.582 [695/740] Linking target lib/librte_ip_frag.so.23.0 00:02:15.582 [696/740] Linking target lib/librte_metrics.so.23.0 00:02:15.582 [697/740] Linking target lib/librte_pcapng.so.23.0 00:02:15.582 [698/740] Linking target lib/librte_gro.so.23.0 00:02:15.582 [699/740] Linking target lib/librte_bpf.so.23.0 00:02:15.582 [700/740] Linking target lib/librte_power.so.23.0 00:02:15.582 [701/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:15.582 [702/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:15.582 [703/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:15.582 [704/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:15.582 [705/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:15.582 [706/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:15.582 [707/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:15.582 [708/740] Linking target lib/librte_node.so.23.0 00:02:15.582 [709/740] Linking target lib/librte_bitratestats.so.23.0 00:02:15.582 [710/740] Linking target lib/librte_port.so.23.0 00:02:15.582 [711/740] Linking target lib/librte_latencystats.so.23.0 00:02:15.582 [712/740] Linking target lib/librte_pdump.so.23.0 00:02:15.841 [713/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:15.841 [714/740] Linking target lib/librte_table.so.23.0 00:02:15.841 [715/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:15.841 [716/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:16.101 [717/740] Linking static target lib/librte_vhost.a 00:02:17.038 [718/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:17.038 [719/740] Linking static target lib/librte_pipeline.a 00:02:17.299 [720/740] Linking target app/dpdk-dumpcap 00:02:17.299 [721/740] Linking target app/dpdk-test-fib 00:02:17.299 [722/740] Linking target app/dpdk-test-acl 00:02:17.299 [723/740] Linking target app/dpdk-test-gpudev 00:02:17.299 [724/740] Linking target app/dpdk-test-cmdline 00:02:17.299 [725/740] Linking target app/dpdk-pdump 00:02:17.299 [726/740] Linking target app/dpdk-proc-info 00:02:17.299 [727/740] Linking target app/dpdk-test-security-perf 00:02:17.299 [728/740] Linking target app/dpdk-test-flow-perf 00:02:17.299 [729/740] Linking target app/dpdk-test-regex 00:02:17.299 [730/740] Linking target app/dpdk-test-sad 00:02:17.299 [731/740] Linking target app/dpdk-test-pipeline 00:02:17.299 [732/740] Linking target app/dpdk-test-compress-perf 00:02:17.299 
[733/740] Linking target app/dpdk-test-crypto-perf 00:02:17.299 [734/740] Linking target app/dpdk-test-bbdev 00:02:17.299 [735/740] Linking target app/dpdk-test-eventdev 00:02:17.299 [736/740] Linking target app/dpdk-testpmd 00:02:17.868 [737/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.868 [738/740] Linking target lib/librte_vhost.so.23.0 00:02:21.163 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.163 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:21.163 10:57:18 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:02:21.163 10:57:18 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:21.163 10:57:18 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:02:21.163 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:21.163 [0/1] Installing files. 00:02:21.163 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:21.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.164 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:21.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:21.165 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.165 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:21.165 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:21.165 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:21.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:21.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:21.166 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:21.428 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:21.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:21.429 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:21.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:21.429 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.429 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.429 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing 
lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing 
lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_port.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.430 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.694 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.694 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.694 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.694 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.694 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.694 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.694 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.694 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.694 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.694 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:21.694 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.694 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:21.694 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.694 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:21.694 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.694 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:21.694 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.694 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.694 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.694 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.694 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.694 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.694 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.694 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.694 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.694 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.694 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.694 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.694 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
00:02:21.694 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.694 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.694 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.694 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:21.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:21.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:21.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:21.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:21.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:21.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:21.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.695 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.696 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.697 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:21.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:21.698 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:21.698 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:21.698 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:21.698 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:21.698 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:21.698 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:21.698 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:21.698 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:21.698 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:21.698 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:21.698 Installing symlink pointing to librte_mempool.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:21.698 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:21.698 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:21.698 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:21.698 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:21.698 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:21.698 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:21.698 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:21.698 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:21.698 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:21.698 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:21.698 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:21.698 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:21.698 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:21.698 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:21.698 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:21.698 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:21.698 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:21.698 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:21.698 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:21.698 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:21.698 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:21.698 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:21.698 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:21.698 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 
00:02:21.698 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:21.699 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:21.699 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:21.699 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:21.699 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:21.699 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:21.699 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:21.699 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:21.699 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:21.699 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:21.699 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:21.699 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:21.699 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:21.699 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:21.699 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:21.699 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:21.699 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:21.699 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:21.699 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:21.699 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:21.699 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:21.699 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:21.699 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:21.699 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 
00:02:21.699 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:21.699 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:21.699 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:21.699 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:21.699 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:21.699 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:21.699 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:21.699 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:21.699 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:21.699 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:21.699 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:21.699 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:21.699 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:21.699 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:21.699 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:21.699 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:21.699 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:21.699 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:21.699 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:21.699 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:21.699 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:21.699 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:21.699 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:21.699 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:21.699 Installing 
symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:21.699 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:21.699 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:21.699 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:21.699 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:21.699 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:21.699 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:21.699 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:21.699 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:21.699 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:21.699 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:21.699 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:21.699 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:21.699 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:21.699 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:21.699 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:21.699 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:21.699 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:21.699 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:21.699 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:21.699 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:21.699 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:21.700 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:21.700 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:21.700 Installing symlink pointing to 
librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:21.700 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:21.700 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:21.700 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:21.700 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:21.700 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:21.700 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:21.700 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:21.700 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:21.700 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:21.700 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:21.700 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:21.700 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:21.700 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:21.700 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:21.700 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:21.700 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:21.700 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:21.700 10:57:19 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:21.700 10:57:19 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:21.700 00:02:21.700 real 0m26.769s 00:02:21.700 user 7m44.722s 00:02:21.700 sys 1m53.150s 00:02:21.700 10:57:19 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:21.700 10:57:19 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:21.700 ************************************ 00:02:21.700 END TEST build_native_dpdk 00:02:21.700 ************************************ 00:02:21.700 10:57:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:21.700 10:57:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:21.700 10:57:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:21.700 10:57:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:21.700 10:57:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:21.700 10:57:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:21.700 10:57:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:21.700 10:57:19 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:21.700 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
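At this point the DPDK tree is fully staged: headers and pkg-config files sit under dpdk/build, and the custom symlink-drivers-solibs.sh step has relinked the PMD shared objects into the dpdk/pmds-23.0 plugin directory, so the configure invocation above can pick DPDK up through its pkg-config files (as the "Using ... for additional libs" line shows). As a minimal sketch, not part of the CI job itself and assuming the workspace paths printed in the log, the staged build could be verified by hand like this:

  # Hypothetical manual check of the staged DPDK build; paths taken from the log above.
  DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build

  # Expose the freshly installed libdpdk.pc / libdpdk-libs.pc to pkg-config.
  export PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig:$PKG_CONFIG_PATH"

  # Confirm libdpdk resolves and inspect the flags a dependent build would consume.
  pkg-config --modversion libdpdk
  pkg-config --cflags libdpdk
  pkg-config --libs libdpdk

  # The driver .so files relinked by symlink-drivers-solibs.sh live in the PMD plugin directory.
  ls "$DPDK_BUILD/lib/dpdk/pmds-23.0"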
00:02:21.960 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:21.960 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:21.960 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:22.528 Using 'verbs' RDMA provider 00:02:35.316 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:47.541 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:47.541 Creating mk/config.mk...done. 00:02:47.541 Creating mk/cc.flags.mk...done. 00:02:47.541 Type 'make' to build. 00:02:47.541 10:57:43 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:02:47.541 10:57:43 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:47.541 10:57:43 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:47.541 10:57:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:47.541 ************************************ 00:02:47.541 START TEST make 00:02:47.541 ************************************ 00:02:47.541 10:57:43 make -- common/autotest_common.sh@1125 -- $ make -j96 00:02:47.541 make[1]: Nothing to be done for 'all'. 00:02:48.113 The Meson build system 00:02:48.113 Version: 1.5.0 00:02:48.113 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:48.113 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:48.113 Build type: native build 00:02:48.113 Project name: libvfio-user 00:02:48.113 Project version: 0.0.1 00:02:48.113 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:48.113 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:48.113 Host machine cpu family: x86_64 00:02:48.113 Host machine cpu: x86_64 00:02:48.113 Run-time dependency threads found: YES 00:02:48.113 Library dl found: YES 00:02:48.113 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:48.113 Run-time dependency json-c found: YES 0.17 00:02:48.113 Run-time dependency cmocka found: YES 1.1.7 00:02:48.113 Program pytest-3 found: NO 00:02:48.113 Program flake8 found: NO 00:02:48.113 Program misspell-fixer found: NO 00:02:48.113 Program restructuredtext-lint found: NO 00:02:48.113 Program valgrind found: YES (/usr/bin/valgrind) 00:02:48.113 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:48.113 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:48.113 Compiler for C supports arguments -Wwrite-strings: YES 00:02:48.113 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:48.113 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:48.113 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:48.113 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:48.113 Build targets in project: 8 00:02:48.113 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:48.113 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:48.113 00:02:48.113 libvfio-user 0.0.1 00:02:48.113 00:02:48.113 User defined options 00:02:48.113 buildtype : debug 00:02:48.113 default_library: shared 00:02:48.113 libdir : /usr/local/lib 00:02:48.113 00:02:48.113 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:48.681 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:48.681 [1/37] Compiling C object samples/null.p/null.c.o 00:02:48.681 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:48.681 [3/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:48.681 [4/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:48.681 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:48.681 [6/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:48.681 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:48.681 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:48.681 [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:48.681 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:48.681 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:48.681 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:48.681 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:48.681 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:48.681 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:48.681 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:48.681 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:48.681 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:48.681 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:48.681 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:48.681 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:48.681 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:48.681 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:48.681 [24/37] Compiling C object samples/server.p/server.c.o 00:02:48.681 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:48.681 [26/37] Compiling C object samples/client.p/client.c.o 00:02:48.940 [27/37] Linking target samples/client 00:02:48.940 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:48.940 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:48.940 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:48.940 [31/37] Linking target test/unit_tests 00:02:48.940 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:48.940 [33/37] Linking target samples/gpio-pci-idio-16 00:02:48.940 [34/37] Linking target samples/null 00:02:48.940 [35/37] Linking target samples/lspci 00:02:48.940 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:48.940 [37/37] Linking target samples/server 00:02:48.940 INFO: autodetecting backend as ninja 00:02:48.940 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
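The libvfio-user submodule is configured with Meson (buildtype debug, shared default library, libdir /usr/local/lib, per the summary above) and its 8 targets are compiled with ninja, after which it is staged into a DESTDIR, as the next log line shows. The exact wrapper invocation is not printed in the log, so the following is only a rough reconstruction from the reported settings; the option spelling is an assumption:

  # Hypothetical reconstruction of the libvfio-user configure/build/install steps.
  SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
  BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug

  # Configure: mirrors the "User defined options" reported by Meson above.
  meson setup --buildtype=debug --default-library=shared --libdir=/usr/local/lib "$BUILD" "$SRC"

  # Compile with the autodetected ninja backend.
  ninja -C "$BUILD"

  # Stage the result under spdk/build/libvfio-user, matching the DESTDIR install on the next line.
  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C "$BUILD"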
00:02:49.199 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:49.458 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:49.458 ninja: no work to do. 00:03:16.014 CC lib/ut_mock/mock.o 00:03:16.014 CC lib/ut/ut.o 00:03:16.014 CC lib/log/log.o 00:03:16.014 CC lib/log/log_flags.o 00:03:16.014 CC lib/log/log_deprecated.o 00:03:16.014 LIB libspdk_ut_mock.a 00:03:16.014 LIB libspdk_log.a 00:03:16.014 LIB libspdk_ut.a 00:03:16.014 SO libspdk_ut_mock.so.6.0 00:03:16.014 SO libspdk_ut.so.2.0 00:03:16.014 SO libspdk_log.so.7.0 00:03:16.014 SYMLINK libspdk_ut_mock.so 00:03:16.014 SYMLINK libspdk_ut.so 00:03:16.014 SYMLINK libspdk_log.so 00:03:16.272 CXX lib/trace_parser/trace.o 00:03:16.272 CC lib/ioat/ioat.o 00:03:16.272 CC lib/dma/dma.o 00:03:16.272 CC lib/util/base64.o 00:03:16.272 CC lib/util/cpuset.o 00:03:16.272 CC lib/util/bit_array.o 00:03:16.272 CC lib/util/crc16.o 00:03:16.272 CC lib/util/crc32.o 00:03:16.272 CC lib/util/crc32_ieee.o 00:03:16.272 CC lib/util/crc32c.o 00:03:16.272 CC lib/util/dif.o 00:03:16.272 CC lib/util/crc64.o 00:03:16.272 CC lib/util/fd.o 00:03:16.272 CC lib/util/file.o 00:03:16.272 CC lib/util/fd_group.o 00:03:16.272 CC lib/util/hexlify.o 00:03:16.272 CC lib/util/iov.o 00:03:16.272 CC lib/util/math.o 00:03:16.272 CC lib/util/net.o 00:03:16.273 CC lib/util/pipe.o 00:03:16.273 CC lib/util/strerror_tls.o 00:03:16.273 CC lib/util/string.o 00:03:16.273 CC lib/util/uuid.o 00:03:16.273 CC lib/util/xor.o 00:03:16.273 CC lib/util/md5.o 00:03:16.273 CC lib/util/zipf.o 00:03:16.531 CC lib/vfio_user/host/vfio_user_pci.o 00:03:16.531 CC lib/vfio_user/host/vfio_user.o 00:03:16.531 LIB libspdk_dma.a 00:03:16.531 SO libspdk_dma.so.5.0 00:03:16.531 LIB libspdk_ioat.a 00:03:16.790 SYMLINK libspdk_dma.so 00:03:16.790 SO libspdk_ioat.so.7.0 00:03:16.790 LIB libspdk_vfio_user.a 00:03:16.790 SYMLINK libspdk_ioat.so 00:03:16.790 SO libspdk_vfio_user.so.5.0 00:03:16.790 SYMLINK libspdk_vfio_user.so 00:03:16.790 LIB libspdk_util.a 00:03:16.790 SO libspdk_util.so.10.0 00:03:17.049 SYMLINK libspdk_util.so 00:03:17.049 LIB libspdk_trace_parser.a 00:03:17.049 SO libspdk_trace_parser.so.6.0 00:03:17.308 SYMLINK libspdk_trace_parser.so 00:03:17.308 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:17.308 CC lib/rdma_provider/common.o 00:03:17.308 CC lib/vmd/vmd.o 00:03:17.308 CC lib/vmd/led.o 00:03:17.308 CC lib/idxd/idxd_user.o 00:03:17.308 CC lib/env_dpdk/env.o 00:03:17.308 CC lib/idxd/idxd.o 00:03:17.308 CC lib/conf/conf.o 00:03:17.308 CC lib/env_dpdk/memory.o 00:03:17.308 CC lib/idxd/idxd_kernel.o 00:03:17.308 CC lib/env_dpdk/threads.o 00:03:17.308 CC lib/env_dpdk/pci.o 00:03:17.308 CC lib/env_dpdk/init.o 00:03:17.308 CC lib/env_dpdk/pci_ioat.o 00:03:17.308 CC lib/env_dpdk/pci_virtio.o 00:03:17.308 CC lib/env_dpdk/pci_vmd.o 00:03:17.308 CC lib/env_dpdk/pci_idxd.o 00:03:17.308 CC lib/json/json_parse.o 00:03:17.308 CC lib/env_dpdk/pci_event.o 00:03:17.308 CC lib/json/json_write.o 00:03:17.308 CC lib/env_dpdk/sigbus_handler.o 00:03:17.308 CC lib/json/json_util.o 00:03:17.308 CC lib/env_dpdk/pci_dpdk.o 00:03:17.308 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:17.308 CC lib/rdma_utils/rdma_utils.o 00:03:17.308 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:17.567 LIB libspdk_rdma_provider.a 00:03:17.567 SO libspdk_rdma_provider.so.6.0 00:03:17.567 LIB libspdk_conf.a 00:03:17.567 SYMLINK libspdk_rdma_provider.so 
00:03:17.567 LIB libspdk_rdma_utils.a 00:03:17.567 SO libspdk_conf.so.6.0 00:03:17.567 LIB libspdk_json.a 00:03:17.567 SO libspdk_rdma_utils.so.1.0 00:03:17.567 SO libspdk_json.so.6.0 00:03:17.567 SYMLINK libspdk_conf.so 00:03:17.567 SYMLINK libspdk_rdma_utils.so 00:03:17.567 SYMLINK libspdk_json.so 00:03:17.825 LIB libspdk_idxd.a 00:03:17.825 SO libspdk_idxd.so.12.1 00:03:17.825 LIB libspdk_vmd.a 00:03:17.825 SO libspdk_vmd.so.6.0 00:03:17.825 SYMLINK libspdk_idxd.so 00:03:17.825 SYMLINK libspdk_vmd.so 00:03:17.825 CC lib/jsonrpc/jsonrpc_server.o 00:03:17.825 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:17.825 CC lib/jsonrpc/jsonrpc_client.o 00:03:17.825 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:18.084 LIB libspdk_jsonrpc.a 00:03:18.084 SO libspdk_jsonrpc.so.6.0 00:03:18.343 SYMLINK libspdk_jsonrpc.so 00:03:18.343 LIB libspdk_env_dpdk.a 00:03:18.343 SO libspdk_env_dpdk.so.15.0 00:03:18.602 SYMLINK libspdk_env_dpdk.so 00:03:18.602 CC lib/rpc/rpc.o 00:03:18.862 LIB libspdk_rpc.a 00:03:18.862 SO libspdk_rpc.so.6.0 00:03:18.862 SYMLINK libspdk_rpc.so 00:03:19.121 CC lib/trace/trace.o 00:03:19.121 CC lib/trace/trace_rpc.o 00:03:19.121 CC lib/trace/trace_flags.o 00:03:19.121 CC lib/notify/notify_rpc.o 00:03:19.121 CC lib/notify/notify.o 00:03:19.121 CC lib/keyring/keyring.o 00:03:19.121 CC lib/keyring/keyring_rpc.o 00:03:19.381 LIB libspdk_notify.a 00:03:19.381 SO libspdk_notify.so.6.0 00:03:19.381 LIB libspdk_trace.a 00:03:19.381 LIB libspdk_keyring.a 00:03:19.381 SYMLINK libspdk_notify.so 00:03:19.381 SO libspdk_trace.so.11.0 00:03:19.381 SO libspdk_keyring.so.2.0 00:03:19.381 SYMLINK libspdk_trace.so 00:03:19.381 SYMLINK libspdk_keyring.so 00:03:19.641 CC lib/sock/sock.o 00:03:19.641 CC lib/sock/sock_rpc.o 00:03:19.641 CC lib/thread/thread.o 00:03:19.641 CC lib/thread/iobuf.o 00:03:19.900 LIB libspdk_sock.a 00:03:20.159 SO libspdk_sock.so.10.0 00:03:20.159 SYMLINK libspdk_sock.so 00:03:20.418 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:20.418 CC lib/nvme/nvme_ctrlr.o 00:03:20.418 CC lib/nvme/nvme_fabric.o 00:03:20.418 CC lib/nvme/nvme_ns_cmd.o 00:03:20.418 CC lib/nvme/nvme_ns.o 00:03:20.418 CC lib/nvme/nvme_pcie_common.o 00:03:20.418 CC lib/nvme/nvme_pcie.o 00:03:20.418 CC lib/nvme/nvme_quirks.o 00:03:20.418 CC lib/nvme/nvme_qpair.o 00:03:20.418 CC lib/nvme/nvme.o 00:03:20.418 CC lib/nvme/nvme_transport.o 00:03:20.418 CC lib/nvme/nvme_discovery.o 00:03:20.418 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:20.418 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:20.418 CC lib/nvme/nvme_tcp.o 00:03:20.418 CC lib/nvme/nvme_opal.o 00:03:20.418 CC lib/nvme/nvme_io_msg.o 00:03:20.418 CC lib/nvme/nvme_poll_group.o 00:03:20.418 CC lib/nvme/nvme_zns.o 00:03:20.418 CC lib/nvme/nvme_stubs.o 00:03:20.418 CC lib/nvme/nvme_auth.o 00:03:20.418 CC lib/nvme/nvme_cuse.o 00:03:20.418 CC lib/nvme/nvme_vfio_user.o 00:03:20.418 CC lib/nvme/nvme_rdma.o 00:03:20.990 LIB libspdk_thread.a 00:03:20.990 SO libspdk_thread.so.10.2 00:03:20.990 SYMLINK libspdk_thread.so 00:03:21.250 CC lib/vfu_tgt/tgt_endpoint.o 00:03:21.250 CC lib/vfu_tgt/tgt_rpc.o 00:03:21.250 CC lib/blob/blobstore.o 00:03:21.250 CC lib/blob/zeroes.o 00:03:21.250 CC lib/blob/request.o 00:03:21.250 CC lib/blob/blob_bs_dev.o 00:03:21.250 CC lib/accel/accel_rpc.o 00:03:21.250 CC lib/accel/accel.o 00:03:21.250 CC lib/accel/accel_sw.o 00:03:21.250 CC lib/virtio/virtio.o 00:03:21.250 CC lib/fsdev/fsdev.o 00:03:21.250 CC lib/virtio/virtio_vhost_user.o 00:03:21.250 CC lib/init/json_config.o 00:03:21.250 CC lib/fsdev/fsdev_io.o 00:03:21.250 CC lib/virtio/virtio_vfio_user.o 00:03:21.250 CC 
lib/init/subsystem.o 00:03:21.250 CC lib/init/rpc.o 00:03:21.250 CC lib/fsdev/fsdev_rpc.o 00:03:21.250 CC lib/init/subsystem_rpc.o 00:03:21.250 CC lib/virtio/virtio_pci.o 00:03:21.508 LIB libspdk_init.a 00:03:21.508 LIB libspdk_vfu_tgt.a 00:03:21.508 LIB libspdk_virtio.a 00:03:21.508 SO libspdk_init.so.6.0 00:03:21.508 SO libspdk_vfu_tgt.so.3.0 00:03:21.508 SO libspdk_virtio.so.7.0 00:03:21.508 SYMLINK libspdk_init.so 00:03:21.508 SYMLINK libspdk_vfu_tgt.so 00:03:21.508 SYMLINK libspdk_virtio.so 00:03:21.768 LIB libspdk_fsdev.a 00:03:21.768 SO libspdk_fsdev.so.1.0 00:03:21.768 SYMLINK libspdk_fsdev.so 00:03:21.768 CC lib/event/app.o 00:03:21.768 CC lib/event/reactor.o 00:03:21.768 CC lib/event/app_rpc.o 00:03:21.768 CC lib/event/scheduler_static.o 00:03:21.768 CC lib/event/log_rpc.o 00:03:22.027 LIB libspdk_accel.a 00:03:22.027 SO libspdk_accel.so.16.0 00:03:22.027 LIB libspdk_nvme.a 00:03:22.027 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:22.027 SYMLINK libspdk_accel.so 00:03:22.286 LIB libspdk_event.a 00:03:22.286 SO libspdk_nvme.so.14.0 00:03:22.286 SO libspdk_event.so.15.0 00:03:22.286 SYMLINK libspdk_event.so 00:03:22.286 SYMLINK libspdk_nvme.so 00:03:22.544 CC lib/bdev/bdev.o 00:03:22.544 CC lib/bdev/bdev_rpc.o 00:03:22.544 CC lib/bdev/bdev_zone.o 00:03:22.544 CC lib/bdev/part.o 00:03:22.544 CC lib/bdev/scsi_nvme.o 00:03:22.544 LIB libspdk_fuse_dispatcher.a 00:03:22.544 SO libspdk_fuse_dispatcher.so.1.0 00:03:22.803 SYMLINK libspdk_fuse_dispatcher.so 00:03:23.369 LIB libspdk_blob.a 00:03:23.369 SO libspdk_blob.so.11.0 00:03:23.369 SYMLINK libspdk_blob.so 00:03:23.629 CC lib/blobfs/blobfs.o 00:03:23.629 CC lib/blobfs/tree.o 00:03:23.888 CC lib/lvol/lvol.o 00:03:24.456 LIB libspdk_bdev.a 00:03:24.456 SO libspdk_bdev.so.17.0 00:03:24.456 LIB libspdk_blobfs.a 00:03:24.456 SO libspdk_blobfs.so.10.0 00:03:24.456 LIB libspdk_lvol.a 00:03:24.456 SYMLINK libspdk_bdev.so 00:03:24.456 SO libspdk_lvol.so.10.0 00:03:24.456 SYMLINK libspdk_blobfs.so 00:03:24.456 SYMLINK libspdk_lvol.so 00:03:24.717 CC lib/nbd/nbd.o 00:03:24.717 CC lib/nbd/nbd_rpc.o 00:03:24.717 CC lib/nvmf/ctrlr.o 00:03:24.717 CC lib/nvmf/ctrlr_discovery.o 00:03:24.717 CC lib/nvmf/ctrlr_bdev.o 00:03:24.717 CC lib/nvmf/subsystem.o 00:03:24.717 CC lib/nvmf/nvmf.o 00:03:24.717 CC lib/nvmf/nvmf_rpc.o 00:03:24.717 CC lib/nvmf/transport.o 00:03:24.717 CC lib/nvmf/tcp.o 00:03:24.717 CC lib/nvmf/stubs.o 00:03:24.717 CC lib/nvmf/mdns_server.o 00:03:24.717 CC lib/nvmf/rdma.o 00:03:24.717 CC lib/nvmf/vfio_user.o 00:03:24.717 CC lib/nvmf/auth.o 00:03:24.717 CC lib/ublk/ublk.o 00:03:24.717 CC lib/ublk/ublk_rpc.o 00:03:24.717 CC lib/scsi/dev.o 00:03:24.717 CC lib/scsi/port.o 00:03:24.717 CC lib/scsi/lun.o 00:03:24.717 CC lib/scsi/scsi.o 00:03:24.717 CC lib/scsi/scsi_bdev.o 00:03:24.717 CC lib/scsi/scsi_rpc.o 00:03:24.717 CC lib/scsi/scsi_pr.o 00:03:24.717 CC lib/scsi/task.o 00:03:24.717 CC lib/ftl/ftl_core.o 00:03:24.717 CC lib/ftl/ftl_init.o 00:03:24.717 CC lib/ftl/ftl_layout.o 00:03:24.717 CC lib/ftl/ftl_debug.o 00:03:24.717 CC lib/ftl/ftl_io.o 00:03:24.717 CC lib/ftl/ftl_sb.o 00:03:24.717 CC lib/ftl/ftl_l2p_flat.o 00:03:24.717 CC lib/ftl/ftl_l2p.o 00:03:24.717 CC lib/ftl/ftl_nv_cache.o 00:03:24.717 CC lib/ftl/ftl_band.o 00:03:24.717 CC lib/ftl/ftl_band_ops.o 00:03:24.717 CC lib/ftl/ftl_writer.o 00:03:24.717 CC lib/ftl/ftl_rq.o 00:03:24.717 CC lib/ftl/ftl_reloc.o 00:03:24.717 CC lib/ftl/ftl_l2p_cache.o 00:03:24.717 CC lib/ftl/ftl_p2l.o 00:03:24.717 CC lib/ftl/ftl_p2l_log.o 00:03:24.717 CC lib/ftl/mngt/ftl_mngt.o 00:03:24.717 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:03:24.717 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:24.717 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:24.717 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:24.717 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:24.717 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:24.717 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:24.717 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:24.717 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:24.717 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:24.717 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:24.717 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:24.717 CC lib/ftl/utils/ftl_conf.o 00:03:24.717 CC lib/ftl/utils/ftl_md.o 00:03:24.717 CC lib/ftl/utils/ftl_mempool.o 00:03:24.717 CC lib/ftl/utils/ftl_bitmap.o 00:03:24.717 CC lib/ftl/utils/ftl_property.o 00:03:24.717 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:24.717 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:24.717 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:24.717 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:24.717 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:24.717 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:24.717 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:24.717 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:24.717 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:24.717 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:24.717 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:24.717 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:24.717 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:24.717 CC lib/ftl/base/ftl_base_dev.o 00:03:24.717 CC lib/ftl/ftl_trace.o 00:03:24.717 CC lib/ftl/base/ftl_base_bdev.o 00:03:25.287 LIB libspdk_nbd.a 00:03:25.580 SO libspdk_nbd.so.7.0 00:03:25.580 SYMLINK libspdk_nbd.so 00:03:25.580 LIB libspdk_scsi.a 00:03:25.580 LIB libspdk_ublk.a 00:03:25.580 SO libspdk_scsi.so.9.0 00:03:25.580 SO libspdk_ublk.so.3.0 00:03:25.580 SYMLINK libspdk_scsi.so 00:03:25.580 SYMLINK libspdk_ublk.so 00:03:25.900 LIB libspdk_ftl.a 00:03:25.900 SO libspdk_ftl.so.9.0 00:03:25.900 CC lib/vhost/vhost.o 00:03:25.900 CC lib/vhost/vhost_rpc.o 00:03:25.900 CC lib/vhost/vhost_scsi.o 00:03:25.900 CC lib/vhost/vhost_blk.o 00:03:25.900 CC lib/iscsi/conn.o 00:03:25.900 CC lib/vhost/rte_vhost_user.o 00:03:25.900 CC lib/iscsi/init_grp.o 00:03:25.900 CC lib/iscsi/iscsi.o 00:03:25.900 CC lib/iscsi/param.o 00:03:25.900 CC lib/iscsi/portal_grp.o 00:03:25.900 CC lib/iscsi/tgt_node.o 00:03:25.900 CC lib/iscsi/iscsi_subsystem.o 00:03:25.900 CC lib/iscsi/iscsi_rpc.o 00:03:25.900 CC lib/iscsi/task.o 00:03:26.162 SYMLINK libspdk_ftl.so 00:03:26.421 LIB libspdk_nvmf.a 00:03:26.421 SO libspdk_nvmf.so.19.0 00:03:26.680 SYMLINK libspdk_nvmf.so 00:03:26.680 LIB libspdk_vhost.a 00:03:26.680 SO libspdk_vhost.so.8.0 00:03:26.938 SYMLINK libspdk_vhost.so 00:03:26.938 LIB libspdk_iscsi.a 00:03:26.938 SO libspdk_iscsi.so.8.0 00:03:26.938 SYMLINK libspdk_iscsi.so 00:03:27.505 CC module/vfu_device/vfu_virtio_blk.o 00:03:27.505 CC module/vfu_device/vfu_virtio.o 00:03:27.505 CC module/vfu_device/vfu_virtio_rpc.o 00:03:27.505 CC module/vfu_device/vfu_virtio_scsi.o 00:03:27.505 CC module/vfu_device/vfu_virtio_fs.o 00:03:27.505 CC module/env_dpdk/env_dpdk_rpc.o 00:03:27.764 CC module/accel/ioat/accel_ioat.o 00:03:27.764 CC module/accel/ioat/accel_ioat_rpc.o 00:03:27.764 CC module/fsdev/aio/fsdev_aio.o 00:03:27.764 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:27.764 CC module/fsdev/aio/linux_aio_mgr.o 00:03:27.764 CC module/sock/posix/posix.o 00:03:27.764 CC module/accel/dsa/accel_dsa_rpc.o 00:03:27.764 CC module/accel/dsa/accel_dsa.o 00:03:27.764 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:27.764 CC module/keyring/file/keyring.o 
00:03:27.764 CC module/keyring/file/keyring_rpc.o 00:03:27.764 CC module/accel/error/accel_error.o 00:03:27.764 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:27.764 CC module/accel/error/accel_error_rpc.o 00:03:27.764 CC module/accel/iaa/accel_iaa.o 00:03:27.764 CC module/blob/bdev/blob_bdev.o 00:03:27.764 CC module/accel/iaa/accel_iaa_rpc.o 00:03:27.764 CC module/scheduler/gscheduler/gscheduler.o 00:03:27.764 CC module/keyring/linux/keyring.o 00:03:27.764 CC module/keyring/linux/keyring_rpc.o 00:03:27.764 LIB libspdk_env_dpdk_rpc.a 00:03:27.764 SO libspdk_env_dpdk_rpc.so.6.0 00:03:27.764 SYMLINK libspdk_env_dpdk_rpc.so 00:03:27.764 LIB libspdk_scheduler_dpdk_governor.a 00:03:27.764 LIB libspdk_keyring_file.a 00:03:27.764 LIB libspdk_accel_ioat.a 00:03:27.764 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:27.764 LIB libspdk_keyring_linux.a 00:03:27.764 LIB libspdk_scheduler_gscheduler.a 00:03:27.764 LIB libspdk_accel_error.a 00:03:27.764 SO libspdk_keyring_file.so.2.0 00:03:27.764 LIB libspdk_scheduler_dynamic.a 00:03:27.764 LIB libspdk_accel_iaa.a 00:03:27.764 SO libspdk_accel_ioat.so.6.0 00:03:27.765 SO libspdk_scheduler_gscheduler.so.4.0 00:03:27.765 SO libspdk_keyring_linux.so.1.0 00:03:27.765 SO libspdk_accel_error.so.2.0 00:03:27.765 SO libspdk_scheduler_dynamic.so.4.0 00:03:28.023 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:28.023 SO libspdk_accel_iaa.so.3.0 00:03:28.023 SYMLINK libspdk_keyring_file.so 00:03:28.023 LIB libspdk_accel_dsa.a 00:03:28.023 SYMLINK libspdk_scheduler_gscheduler.so 00:03:28.023 LIB libspdk_blob_bdev.a 00:03:28.023 SYMLINK libspdk_accel_ioat.so 00:03:28.023 SYMLINK libspdk_keyring_linux.so 00:03:28.023 SYMLINK libspdk_accel_error.so 00:03:28.023 SYMLINK libspdk_scheduler_dynamic.so 00:03:28.023 SO libspdk_accel_dsa.so.5.0 00:03:28.023 SYMLINK libspdk_accel_iaa.so 00:03:28.023 SO libspdk_blob_bdev.so.11.0 00:03:28.023 SYMLINK libspdk_accel_dsa.so 00:03:28.023 SYMLINK libspdk_blob_bdev.so 00:03:28.023 LIB libspdk_vfu_device.a 00:03:28.023 SO libspdk_vfu_device.so.3.0 00:03:28.282 LIB libspdk_fsdev_aio.a 00:03:28.282 SYMLINK libspdk_vfu_device.so 00:03:28.282 SO libspdk_fsdev_aio.so.1.0 00:03:28.282 LIB libspdk_sock_posix.a 00:03:28.282 SYMLINK libspdk_fsdev_aio.so 00:03:28.282 SO libspdk_sock_posix.so.6.0 00:03:28.282 SYMLINK libspdk_sock_posix.so 00:03:28.542 CC module/bdev/gpt/vbdev_gpt.o 00:03:28.542 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:28.542 CC module/bdev/gpt/gpt.o 00:03:28.542 CC module/bdev/lvol/vbdev_lvol.o 00:03:28.542 CC module/bdev/null/bdev_null.o 00:03:28.542 CC module/bdev/null/bdev_null_rpc.o 00:03:28.542 CC module/blobfs/bdev/blobfs_bdev.o 00:03:28.542 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:28.542 CC module/bdev/raid/bdev_raid.o 00:03:28.542 CC module/bdev/aio/bdev_aio.o 00:03:28.542 CC module/bdev/nvme/bdev_nvme.o 00:03:28.542 CC module/bdev/raid/bdev_raid_rpc.o 00:03:28.542 CC module/bdev/aio/bdev_aio_rpc.o 00:03:28.542 CC module/bdev/delay/vbdev_delay.o 00:03:28.542 CC module/bdev/nvme/nvme_rpc.o 00:03:28.542 CC module/bdev/raid/bdev_raid_sb.o 00:03:28.542 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:28.542 CC module/bdev/passthru/vbdev_passthru.o 00:03:28.542 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:28.542 CC module/bdev/raid/raid0.o 00:03:28.542 CC module/bdev/nvme/bdev_mdns_client.o 00:03:28.542 CC module/bdev/nvme/vbdev_opal.o 00:03:28.542 CC module/bdev/raid/raid1.o 00:03:28.542 CC module/bdev/raid/concat.o 00:03:28.542 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:28.542 CC module/bdev/ftl/bdev_ftl.o 
00:03:28.542 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:28.542 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:28.542 CC module/bdev/malloc/bdev_malloc.o 00:03:28.542 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:28.542 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:28.542 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:28.542 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:28.542 CC module/bdev/error/vbdev_error_rpc.o 00:03:28.542 CC module/bdev/error/vbdev_error.o 00:03:28.542 CC module/bdev/split/vbdev_split.o 00:03:28.542 CC module/bdev/split/vbdev_split_rpc.o 00:03:28.542 CC module/bdev/iscsi/bdev_iscsi.o 00:03:28.542 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:28.542 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:28.542 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:28.542 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:28.802 LIB libspdk_blobfs_bdev.a 00:03:28.802 SO libspdk_blobfs_bdev.so.6.0 00:03:28.802 LIB libspdk_bdev_split.a 00:03:28.802 LIB libspdk_bdev_error.a 00:03:28.802 SO libspdk_bdev_error.so.6.0 00:03:28.802 LIB libspdk_bdev_gpt.a 00:03:28.802 LIB libspdk_bdev_passthru.a 00:03:28.802 SO libspdk_bdev_split.so.6.0 00:03:28.802 LIB libspdk_bdev_null.a 00:03:28.802 SYMLINK libspdk_blobfs_bdev.so 00:03:28.802 LIB libspdk_bdev_zone_block.a 00:03:28.802 SO libspdk_bdev_gpt.so.6.0 00:03:28.802 SO libspdk_bdev_passthru.so.6.0 00:03:28.802 SO libspdk_bdev_null.so.6.0 00:03:28.802 LIB libspdk_bdev_ftl.a 00:03:28.802 SO libspdk_bdev_zone_block.so.6.0 00:03:28.802 LIB libspdk_bdev_aio.a 00:03:28.802 LIB libspdk_bdev_iscsi.a 00:03:28.802 SYMLINK libspdk_bdev_error.so 00:03:28.802 SYMLINK libspdk_bdev_split.so 00:03:28.802 LIB libspdk_bdev_delay.a 00:03:28.802 SYMLINK libspdk_bdev_gpt.so 00:03:28.802 SO libspdk_bdev_ftl.so.6.0 00:03:29.061 LIB libspdk_bdev_malloc.a 00:03:29.061 SO libspdk_bdev_aio.so.6.0 00:03:29.061 SO libspdk_bdev_iscsi.so.6.0 00:03:29.061 SYMLINK libspdk_bdev_passthru.so 00:03:29.061 SYMLINK libspdk_bdev_null.so 00:03:29.061 SO libspdk_bdev_delay.so.6.0 00:03:29.061 SYMLINK libspdk_bdev_zone_block.so 00:03:29.061 SO libspdk_bdev_malloc.so.6.0 00:03:29.061 LIB libspdk_bdev_lvol.a 00:03:29.061 SYMLINK libspdk_bdev_ftl.so 00:03:29.061 SYMLINK libspdk_bdev_aio.so 00:03:29.061 SYMLINK libspdk_bdev_iscsi.so 00:03:29.061 SYMLINK libspdk_bdev_delay.so 00:03:29.061 SO libspdk_bdev_lvol.so.6.0 00:03:29.061 SYMLINK libspdk_bdev_malloc.so 00:03:29.061 LIB libspdk_bdev_virtio.a 00:03:29.061 SYMLINK libspdk_bdev_lvol.so 00:03:29.061 SO libspdk_bdev_virtio.so.6.0 00:03:29.061 SYMLINK libspdk_bdev_virtio.so 00:03:29.320 LIB libspdk_bdev_raid.a 00:03:29.320 SO libspdk_bdev_raid.so.6.0 00:03:29.579 SYMLINK libspdk_bdev_raid.so 00:03:30.145 LIB libspdk_bdev_nvme.a 00:03:30.145 SO libspdk_bdev_nvme.so.7.0 00:03:30.404 SYMLINK libspdk_bdev_nvme.so 00:03:30.970 CC module/event/subsystems/scheduler/scheduler.o 00:03:30.970 CC module/event/subsystems/sock/sock.o 00:03:30.970 CC module/event/subsystems/keyring/keyring.o 00:03:30.970 CC module/event/subsystems/vmd/vmd.o 00:03:30.970 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:30.970 CC module/event/subsystems/iobuf/iobuf.o 00:03:30.970 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:30.970 CC module/event/subsystems/fsdev/fsdev.o 00:03:30.970 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:30.970 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:30.970 LIB libspdk_event_scheduler.a 00:03:31.227 LIB libspdk_event_fsdev.a 00:03:31.227 LIB libspdk_event_vhost_blk.a 00:03:31.227 LIB libspdk_event_keyring.a 00:03:31.227 SO 
libspdk_event_scheduler.so.4.0 00:03:31.227 LIB libspdk_event_vmd.a 00:03:31.227 LIB libspdk_event_sock.a 00:03:31.227 SO libspdk_event_vhost_blk.so.3.0 00:03:31.227 LIB libspdk_event_iobuf.a 00:03:31.227 LIB libspdk_event_vfu_tgt.a 00:03:31.227 SO libspdk_event_fsdev.so.1.0 00:03:31.227 SO libspdk_event_keyring.so.1.0 00:03:31.227 SO libspdk_event_vmd.so.6.0 00:03:31.227 SO libspdk_event_sock.so.5.0 00:03:31.227 SO libspdk_event_iobuf.so.3.0 00:03:31.227 SO libspdk_event_vfu_tgt.so.3.0 00:03:31.227 SYMLINK libspdk_event_scheduler.so 00:03:31.227 SYMLINK libspdk_event_vhost_blk.so 00:03:31.227 SYMLINK libspdk_event_fsdev.so 00:03:31.227 SYMLINK libspdk_event_keyring.so 00:03:31.227 SYMLINK libspdk_event_sock.so 00:03:31.227 SYMLINK libspdk_event_vmd.so 00:03:31.227 SYMLINK libspdk_event_iobuf.so 00:03:31.227 SYMLINK libspdk_event_vfu_tgt.so 00:03:31.487 CC module/event/subsystems/accel/accel.o 00:03:31.746 LIB libspdk_event_accel.a 00:03:31.746 SO libspdk_event_accel.so.6.0 00:03:31.746 SYMLINK libspdk_event_accel.so 00:03:32.005 CC module/event/subsystems/bdev/bdev.o 00:03:32.263 LIB libspdk_event_bdev.a 00:03:32.263 SO libspdk_event_bdev.so.6.0 00:03:32.263 SYMLINK libspdk_event_bdev.so 00:03:32.521 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:32.521 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:32.521 CC module/event/subsystems/scsi/scsi.o 00:03:32.521 CC module/event/subsystems/ublk/ublk.o 00:03:32.521 CC module/event/subsystems/nbd/nbd.o 00:03:32.781 LIB libspdk_event_ublk.a 00:03:32.781 LIB libspdk_event_scsi.a 00:03:32.781 LIB libspdk_event_nbd.a 00:03:32.781 SO libspdk_event_ublk.so.3.0 00:03:32.781 SO libspdk_event_scsi.so.6.0 00:03:32.781 SO libspdk_event_nbd.so.6.0 00:03:32.781 LIB libspdk_event_nvmf.a 00:03:32.781 SYMLINK libspdk_event_ublk.so 00:03:32.781 SYMLINK libspdk_event_scsi.so 00:03:32.781 SO libspdk_event_nvmf.so.6.0 00:03:32.781 SYMLINK libspdk_event_nbd.so 00:03:32.781 SYMLINK libspdk_event_nvmf.so 00:03:33.040 CC module/event/subsystems/iscsi/iscsi.o 00:03:33.040 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:33.300 LIB libspdk_event_iscsi.a 00:03:33.300 LIB libspdk_event_vhost_scsi.a 00:03:33.300 SO libspdk_event_iscsi.so.6.0 00:03:33.300 SO libspdk_event_vhost_scsi.so.3.0 00:03:33.300 SYMLINK libspdk_event_iscsi.so 00:03:33.300 SYMLINK libspdk_event_vhost_scsi.so 00:03:33.559 SO libspdk.so.6.0 00:03:33.559 SYMLINK libspdk.so 00:03:33.824 CC app/spdk_top/spdk_top.o 00:03:33.824 CC test/rpc_client/rpc_client_test.o 00:03:33.824 CXX app/trace/trace.o 00:03:33.824 CC app/spdk_nvme_discover/discovery_aer.o 00:03:33.824 TEST_HEADER include/spdk/accel.h 00:03:33.824 TEST_HEADER include/spdk/accel_module.h 00:03:33.824 TEST_HEADER include/spdk/assert.h 00:03:33.824 TEST_HEADER include/spdk/barrier.h 00:03:33.824 TEST_HEADER include/spdk/base64.h 00:03:33.824 CC app/trace_record/trace_record.o 00:03:33.824 TEST_HEADER include/spdk/bdev_zone.h 00:03:33.824 CC app/spdk_nvme_perf/perf.o 00:03:33.824 TEST_HEADER include/spdk/bdev.h 00:03:33.824 TEST_HEADER include/spdk/bdev_module.h 00:03:33.824 TEST_HEADER include/spdk/bit_pool.h 00:03:33.824 TEST_HEADER include/spdk/bit_array.h 00:03:33.824 TEST_HEADER include/spdk/blob_bdev.h 00:03:33.824 TEST_HEADER include/spdk/blobfs.h 00:03:33.824 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:33.824 CC app/spdk_lspci/spdk_lspci.o 00:03:33.824 TEST_HEADER include/spdk/blob.h 00:03:33.825 TEST_HEADER include/spdk/conf.h 00:03:33.825 TEST_HEADER include/spdk/crc16.h 00:03:33.825 TEST_HEADER include/spdk/config.h 
00:03:33.825 TEST_HEADER include/spdk/crc32.h 00:03:33.825 TEST_HEADER include/spdk/cpuset.h 00:03:33.825 TEST_HEADER include/spdk/crc64.h 00:03:33.825 TEST_HEADER include/spdk/dma.h 00:03:33.825 TEST_HEADER include/spdk/dif.h 00:03:33.825 TEST_HEADER include/spdk/env_dpdk.h 00:03:33.825 TEST_HEADER include/spdk/endian.h 00:03:33.825 TEST_HEADER include/spdk/event.h 00:03:33.825 TEST_HEADER include/spdk/env.h 00:03:33.825 TEST_HEADER include/spdk/file.h 00:03:33.825 TEST_HEADER include/spdk/fd_group.h 00:03:33.825 TEST_HEADER include/spdk/fd.h 00:03:33.825 CC app/spdk_nvme_identify/identify.o 00:03:33.825 TEST_HEADER include/spdk/fsdev_module.h 00:03:33.825 TEST_HEADER include/spdk/fsdev.h 00:03:33.825 TEST_HEADER include/spdk/ftl.h 00:03:33.825 TEST_HEADER include/spdk/gpt_spec.h 00:03:33.825 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:33.825 TEST_HEADER include/spdk/hexlify.h 00:03:33.825 TEST_HEADER include/spdk/idxd_spec.h 00:03:33.825 TEST_HEADER include/spdk/idxd.h 00:03:33.825 TEST_HEADER include/spdk/histogram_data.h 00:03:33.825 TEST_HEADER include/spdk/init.h 00:03:33.825 TEST_HEADER include/spdk/ioat.h 00:03:33.825 TEST_HEADER include/spdk/ioat_spec.h 00:03:33.825 TEST_HEADER include/spdk/iscsi_spec.h 00:03:33.825 TEST_HEADER include/spdk/json.h 00:03:33.825 TEST_HEADER include/spdk/keyring.h 00:03:33.825 TEST_HEADER include/spdk/jsonrpc.h 00:03:33.825 TEST_HEADER include/spdk/keyring_module.h 00:03:33.825 TEST_HEADER include/spdk/likely.h 00:03:33.825 TEST_HEADER include/spdk/log.h 00:03:33.825 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:33.825 TEST_HEADER include/spdk/lvol.h 00:03:33.825 TEST_HEADER include/spdk/md5.h 00:03:33.825 TEST_HEADER include/spdk/memory.h 00:03:33.825 TEST_HEADER include/spdk/net.h 00:03:33.825 TEST_HEADER include/spdk/mmio.h 00:03:33.825 TEST_HEADER include/spdk/notify.h 00:03:33.825 TEST_HEADER include/spdk/nbd.h 00:03:33.825 CC app/spdk_dd/spdk_dd.o 00:03:33.825 TEST_HEADER include/spdk/nvme.h 00:03:33.825 TEST_HEADER include/spdk/nvme_intel.h 00:03:33.825 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:33.825 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:33.825 TEST_HEADER include/spdk/nvme_spec.h 00:03:33.825 TEST_HEADER include/spdk/nvme_zns.h 00:03:33.825 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:33.825 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:33.825 TEST_HEADER include/spdk/nvmf.h 00:03:33.825 TEST_HEADER include/spdk/nvmf_spec.h 00:03:33.825 TEST_HEADER include/spdk/nvmf_transport.h 00:03:33.825 TEST_HEADER include/spdk/opal.h 00:03:33.825 TEST_HEADER include/spdk/opal_spec.h 00:03:33.825 TEST_HEADER include/spdk/pipe.h 00:03:33.825 TEST_HEADER include/spdk/pci_ids.h 00:03:33.825 TEST_HEADER include/spdk/queue.h 00:03:33.825 TEST_HEADER include/spdk/reduce.h 00:03:33.825 TEST_HEADER include/spdk/scheduler.h 00:03:33.825 TEST_HEADER include/spdk/rpc.h 00:03:33.825 TEST_HEADER include/spdk/scsi.h 00:03:33.825 TEST_HEADER include/spdk/scsi_spec.h 00:03:33.825 TEST_HEADER include/spdk/sock.h 00:03:33.825 TEST_HEADER include/spdk/stdinc.h 00:03:33.825 TEST_HEADER include/spdk/string.h 00:03:33.825 TEST_HEADER include/spdk/thread.h 00:03:33.825 CC app/nvmf_tgt/nvmf_main.o 00:03:33.825 TEST_HEADER include/spdk/trace.h 00:03:33.825 TEST_HEADER include/spdk/trace_parser.h 00:03:33.825 TEST_HEADER include/spdk/ublk.h 00:03:33.825 TEST_HEADER include/spdk/tree.h 00:03:33.825 TEST_HEADER include/spdk/uuid.h 00:03:33.825 TEST_HEADER include/spdk/version.h 00:03:33.825 TEST_HEADER include/spdk/util.h 00:03:33.825 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:03:33.825 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:33.825 TEST_HEADER include/spdk/vhost.h 00:03:33.825 TEST_HEADER include/spdk/xor.h 00:03:33.825 TEST_HEADER include/spdk/vmd.h 00:03:33.825 CXX test/cpp_headers/accel.o 00:03:33.825 CC app/spdk_tgt/spdk_tgt.o 00:03:33.825 CC app/iscsi_tgt/iscsi_tgt.o 00:03:33.825 CXX test/cpp_headers/accel_module.o 00:03:33.825 TEST_HEADER include/spdk/zipf.h 00:03:33.825 CXX test/cpp_headers/assert.o 00:03:33.825 CXX test/cpp_headers/barrier.o 00:03:33.825 CXX test/cpp_headers/base64.o 00:03:33.825 CXX test/cpp_headers/bdev_module.o 00:03:33.825 CXX test/cpp_headers/bit_array.o 00:03:33.825 CXX test/cpp_headers/bit_pool.o 00:03:33.825 CXX test/cpp_headers/bdev_zone.o 00:03:33.825 CXX test/cpp_headers/bdev.o 00:03:33.825 CXX test/cpp_headers/blob_bdev.o 00:03:33.825 CXX test/cpp_headers/blobfs_bdev.o 00:03:33.825 CXX test/cpp_headers/blob.o 00:03:33.825 CXX test/cpp_headers/blobfs.o 00:03:33.825 CXX test/cpp_headers/conf.o 00:03:33.825 CXX test/cpp_headers/cpuset.o 00:03:33.825 CXX test/cpp_headers/config.o 00:03:33.825 CXX test/cpp_headers/crc16.o 00:03:33.825 CXX test/cpp_headers/crc64.o 00:03:33.825 CXX test/cpp_headers/dif.o 00:03:33.825 CXX test/cpp_headers/crc32.o 00:03:33.825 CXX test/cpp_headers/endian.o 00:03:33.825 CXX test/cpp_headers/dma.o 00:03:33.825 CXX test/cpp_headers/env_dpdk.o 00:03:33.825 CXX test/cpp_headers/event.o 00:03:33.825 CXX test/cpp_headers/env.o 00:03:33.825 CXX test/cpp_headers/fd_group.o 00:03:33.825 CXX test/cpp_headers/fsdev.o 00:03:33.825 CXX test/cpp_headers/fd.o 00:03:33.825 CXX test/cpp_headers/fsdev_module.o 00:03:33.825 CXX test/cpp_headers/ftl.o 00:03:33.825 CXX test/cpp_headers/file.o 00:03:33.825 CXX test/cpp_headers/fuse_dispatcher.o 00:03:33.825 CXX test/cpp_headers/hexlify.o 00:03:33.825 CXX test/cpp_headers/histogram_data.o 00:03:33.825 CXX test/cpp_headers/gpt_spec.o 00:03:33.825 CXX test/cpp_headers/idxd.o 00:03:33.825 CXX test/cpp_headers/ioat.o 00:03:33.825 CXX test/cpp_headers/init.o 00:03:33.825 CXX test/cpp_headers/idxd_spec.o 00:03:33.825 CXX test/cpp_headers/ioat_spec.o 00:03:33.825 CXX test/cpp_headers/jsonrpc.o 00:03:33.825 CXX test/cpp_headers/json.o 00:03:33.825 CXX test/cpp_headers/iscsi_spec.o 00:03:33.825 CXX test/cpp_headers/keyring_module.o 00:03:33.825 CXX test/cpp_headers/keyring.o 00:03:33.825 CXX test/cpp_headers/likely.o 00:03:33.825 CXX test/cpp_headers/lvol.o 00:03:33.825 CXX test/cpp_headers/log.o 00:03:33.825 CXX test/cpp_headers/memory.o 00:03:33.825 CXX test/cpp_headers/md5.o 00:03:33.825 CXX test/cpp_headers/mmio.o 00:03:33.825 CXX test/cpp_headers/nbd.o 00:03:33.825 CXX test/cpp_headers/net.o 00:03:33.825 CXX test/cpp_headers/nvme.o 00:03:33.825 CXX test/cpp_headers/nvme_intel.o 00:03:33.825 CXX test/cpp_headers/notify.o 00:03:33.825 CXX test/cpp_headers/nvme_ocssd.o 00:03:33.825 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:33.825 CXX test/cpp_headers/nvme_zns.o 00:03:33.825 CXX test/cpp_headers/nvme_spec.o 00:03:33.825 CXX test/cpp_headers/nvmf_cmd.o 00:03:33.825 CXX test/cpp_headers/nvmf.o 00:03:33.825 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:33.825 CXX test/cpp_headers/nvmf_spec.o 00:03:33.825 CXX test/cpp_headers/nvmf_transport.o 00:03:34.094 CXX test/cpp_headers/opal.o 00:03:34.094 CC examples/ioat/perf/perf.o 00:03:34.094 CC test/app/histogram_perf/histogram_perf.o 00:03:34.094 CXX test/cpp_headers/opal_spec.o 00:03:34.094 CC examples/ioat/verify/verify.o 00:03:34.094 CC examples/util/zipf/zipf.o 00:03:34.094 CC 
test/app/jsoncat/jsoncat.o 00:03:34.094 CC test/thread/poller_perf/poller_perf.o 00:03:34.094 CC test/app/stub/stub.o 00:03:34.094 CC test/env/memory/memory_ut.o 00:03:34.094 CC test/env/pci/pci_ut.o 00:03:34.094 CC test/app/bdev_svc/bdev_svc.o 00:03:34.094 CC app/fio/nvme/fio_plugin.o 00:03:34.094 CC test/env/vtophys/vtophys.o 00:03:34.094 CC test/dma/test_dma/test_dma.o 00:03:34.094 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:34.094 CC app/fio/bdev/fio_plugin.o 00:03:34.372 LINK spdk_lspci 00:03:34.372 LINK rpc_client_test 00:03:34.372 LINK nvmf_tgt 00:03:34.630 LINK interrupt_tgt 00:03:34.630 LINK spdk_nvme_discover 00:03:34.630 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:34.630 CC test/env/mem_callbacks/mem_callbacks.o 00:03:34.630 LINK histogram_perf 00:03:34.630 LINK poller_perf 00:03:34.630 LINK zipf 00:03:34.630 CXX test/cpp_headers/pci_ids.o 00:03:34.630 CXX test/cpp_headers/pipe.o 00:03:34.630 CXX test/cpp_headers/queue.o 00:03:34.630 CXX test/cpp_headers/reduce.o 00:03:34.630 CXX test/cpp_headers/rpc.o 00:03:34.630 CXX test/cpp_headers/scheduler.o 00:03:34.630 CXX test/cpp_headers/scsi.o 00:03:34.630 CXX test/cpp_headers/scsi_spec.o 00:03:34.630 CXX test/cpp_headers/sock.o 00:03:34.630 CXX test/cpp_headers/stdinc.o 00:03:34.630 CXX test/cpp_headers/string.o 00:03:34.630 CXX test/cpp_headers/thread.o 00:03:34.630 CXX test/cpp_headers/trace.o 00:03:34.630 CXX test/cpp_headers/trace_parser.o 00:03:34.630 CXX test/cpp_headers/tree.o 00:03:34.630 CXX test/cpp_headers/util.o 00:03:34.630 CXX test/cpp_headers/ublk.o 00:03:34.630 CXX test/cpp_headers/uuid.o 00:03:34.630 LINK spdk_tgt 00:03:34.630 CXX test/cpp_headers/version.o 00:03:34.630 CXX test/cpp_headers/vfio_user_pci.o 00:03:34.630 CXX test/cpp_headers/vfio_user_spec.o 00:03:34.630 CXX test/cpp_headers/vhost.o 00:03:34.630 LINK jsoncat 00:03:34.630 CXX test/cpp_headers/vmd.o 00:03:34.630 LINK iscsi_tgt 00:03:34.630 CXX test/cpp_headers/xor.o 00:03:34.630 CXX test/cpp_headers/zipf.o 00:03:34.630 LINK spdk_trace_record 00:03:34.630 LINK ioat_perf 00:03:34.630 LINK verify 00:03:34.630 LINK vtophys 00:03:34.630 LINK env_dpdk_post_init 00:03:34.630 LINK stub 00:03:34.630 LINK bdev_svc 00:03:34.630 LINK spdk_dd 00:03:34.889 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:34.889 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:34.889 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:34.889 LINK spdk_trace 00:03:34.889 LINK mem_callbacks 00:03:34.889 LINK pci_ut 00:03:35.147 CC test/event/reactor/reactor.o 00:03:35.147 CC test/event/reactor_perf/reactor_perf.o 00:03:35.147 CC test/event/event_perf/event_perf.o 00:03:35.147 CC examples/vmd/lsvmd/lsvmd.o 00:03:35.147 LINK test_dma 00:03:35.147 CC examples/sock/hello_world/hello_sock.o 00:03:35.147 CC examples/vmd/led/led.o 00:03:35.147 CC test/event/app_repeat/app_repeat.o 00:03:35.147 CC examples/idxd/perf/perf.o 00:03:35.147 CC test/event/scheduler/scheduler.o 00:03:35.147 CC examples/thread/thread/thread_ex.o 00:03:35.147 LINK spdk_top 00:03:35.147 LINK nvme_fuzz 00:03:35.147 LINK spdk_bdev 00:03:35.147 LINK spdk_nvme 00:03:35.147 LINK spdk_nvme_perf 00:03:35.147 LINK lsvmd 00:03:35.147 LINK reactor 00:03:35.147 LINK spdk_nvme_identify 00:03:35.147 LINK event_perf 00:03:35.147 LINK led 00:03:35.147 LINK vhost_fuzz 00:03:35.147 LINK reactor_perf 00:03:35.406 LINK app_repeat 00:03:35.406 LINK hello_sock 00:03:35.406 CC app/vhost/vhost.o 00:03:35.406 LINK scheduler 00:03:35.406 LINK idxd_perf 00:03:35.406 LINK memory_ut 00:03:35.406 LINK thread 00:03:35.664 CC 
test/nvme/sgl/sgl.o 00:03:35.664 CC test/nvme/aer/aer.o 00:03:35.664 CC test/nvme/e2edp/nvme_dp.o 00:03:35.664 CC test/nvme/boot_partition/boot_partition.o 00:03:35.664 CC test/nvme/fdp/fdp.o 00:03:35.664 CC test/nvme/simple_copy/simple_copy.o 00:03:35.664 CC test/nvme/startup/startup.o 00:03:35.664 CC test/nvme/reserve/reserve.o 00:03:35.664 CC test/nvme/reset/reset.o 00:03:35.664 CC test/nvme/connect_stress/connect_stress.o 00:03:35.664 CC test/nvme/err_injection/err_injection.o 00:03:35.664 CC test/nvme/overhead/overhead.o 00:03:35.664 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:35.664 CC test/nvme/fused_ordering/fused_ordering.o 00:03:35.664 CC test/nvme/compliance/nvme_compliance.o 00:03:35.664 CC test/nvme/cuse/cuse.o 00:03:35.664 LINK vhost 00:03:35.664 CC test/blobfs/mkfs/mkfs.o 00:03:35.664 CC test/accel/dif/dif.o 00:03:35.664 CC test/lvol/esnap/esnap.o 00:03:35.664 CC examples/nvme/hello_world/hello_world.o 00:03:35.664 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:35.664 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:35.664 CC examples/nvme/arbitration/arbitration.o 00:03:35.664 CC examples/nvme/reconnect/reconnect.o 00:03:35.664 CC examples/nvme/hotplug/hotplug.o 00:03:35.664 CC examples/nvme/abort/abort.o 00:03:35.664 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:35.664 LINK startup 00:03:35.664 LINK boot_partition 00:03:35.664 LINK err_injection 00:03:35.664 LINK connect_stress 00:03:35.922 LINK doorbell_aers 00:03:35.922 LINK reserve 00:03:35.922 LINK fused_ordering 00:03:35.922 LINK simple_copy 00:03:35.922 LINK reset 00:03:35.922 LINK mkfs 00:03:35.922 LINK nvme_dp 00:03:35.922 LINK aer 00:03:35.922 LINK sgl 00:03:35.922 LINK overhead 00:03:35.922 LINK fdp 00:03:35.922 LINK nvme_compliance 00:03:35.922 CC examples/accel/perf/accel_perf.o 00:03:35.922 CC examples/blob/cli/blobcli.o 00:03:35.922 LINK pmr_persistence 00:03:35.922 CC examples/blob/hello_world/hello_blob.o 00:03:35.922 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:35.922 LINK cmb_copy 00:03:35.922 LINK hello_world 00:03:35.922 LINK hotplug 00:03:36.180 LINK arbitration 00:03:36.180 LINK reconnect 00:03:36.180 LINK abort 00:03:36.180 LINK nvme_manage 00:03:36.180 LINK hello_blob 00:03:36.180 LINK hello_fsdev 00:03:36.180 LINK dif 00:03:36.180 LINK iscsi_fuzz 00:03:36.438 LINK accel_perf 00:03:36.438 LINK blobcli 00:03:36.708 LINK cuse 00:03:36.708 CC test/bdev/bdevio/bdevio.o 00:03:36.708 CC examples/bdev/bdevperf/bdevperf.o 00:03:36.708 CC examples/bdev/hello_world/hello_bdev.o 00:03:36.969 LINK hello_bdev 00:03:36.969 LINK bdevio 00:03:37.535 LINK bdevperf 00:03:37.792 CC examples/nvmf/nvmf/nvmf.o 00:03:38.050 LINK nvmf 00:03:39.425 LINK esnap 00:03:39.425 00:03:39.425 real 0m53.285s 00:03:39.425 user 6m44.025s 00:03:39.425 sys 2m50.367s 00:03:39.425 10:58:36 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:39.425 10:58:36 make -- common/autotest_common.sh@10 -- $ set +x 00:03:39.425 ************************************ 00:03:39.425 END TEST make 00:03:39.425 ************************************ 00:03:39.425 10:58:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:39.425 10:58:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:39.425 10:58:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:39.425 10:58:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.425 10:58:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:39.425 10:58:36 -- pm/common@44 -- $ 
pid=1750259 00:03:39.425 10:58:36 -- pm/common@50 -- $ kill -TERM 1750259 00:03:39.425 10:58:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.425 10:58:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:39.425 10:58:36 -- pm/common@44 -- $ pid=1750261 00:03:39.425 10:58:36 -- pm/common@50 -- $ kill -TERM 1750261 00:03:39.425 10:58:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.425 10:58:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:39.425 10:58:36 -- pm/common@44 -- $ pid=1750262 00:03:39.425 10:58:36 -- pm/common@50 -- $ kill -TERM 1750262 00:03:39.425 10:58:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.425 10:58:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:39.425 10:58:36 -- pm/common@44 -- $ pid=1750285 00:03:39.425 10:58:36 -- pm/common@50 -- $ sudo -E kill -TERM 1750285 00:03:39.685 10:58:37 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:39.685 10:58:37 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:39.685 10:58:37 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:39.685 10:58:37 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:39.685 10:58:37 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.685 10:58:37 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.685 10:58:37 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.685 10:58:37 -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.685 10:58:37 -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.685 10:58:37 -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.685 10:58:37 -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.685 10:58:37 -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.685 10:58:37 -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.685 10:58:37 -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.685 10:58:37 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.685 10:58:37 -- scripts/common.sh@344 -- # case "$op" in 00:03:39.685 10:58:37 -- scripts/common.sh@345 -- # : 1 00:03:39.685 10:58:37 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.685 10:58:37 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:39.685 10:58:37 -- scripts/common.sh@365 -- # decimal 1 00:03:39.685 10:58:37 -- scripts/common.sh@353 -- # local d=1 00:03:39.685 10:58:37 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.685 10:58:37 -- scripts/common.sh@355 -- # echo 1 00:03:39.685 10:58:37 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.685 10:58:37 -- scripts/common.sh@366 -- # decimal 2 00:03:39.685 10:58:37 -- scripts/common.sh@353 -- # local d=2 00:03:39.685 10:58:37 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.685 10:58:37 -- scripts/common.sh@355 -- # echo 2 00:03:39.685 10:58:37 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:39.685 10:58:37 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:39.685 10:58:37 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:39.685 10:58:37 -- scripts/common.sh@368 -- # return 0 00:03:39.685 10:58:37 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.685 10:58:37 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:39.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.685 --rc genhtml_branch_coverage=1 00:03:39.685 --rc genhtml_function_coverage=1 00:03:39.685 --rc genhtml_legend=1 00:03:39.685 --rc geninfo_all_blocks=1 00:03:39.685 --rc geninfo_unexecuted_blocks=1 00:03:39.685 00:03:39.685 ' 00:03:39.685 10:58:37 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:39.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.685 --rc genhtml_branch_coverage=1 00:03:39.685 --rc genhtml_function_coverage=1 00:03:39.685 --rc genhtml_legend=1 00:03:39.685 --rc geninfo_all_blocks=1 00:03:39.685 --rc geninfo_unexecuted_blocks=1 00:03:39.685 00:03:39.685 ' 00:03:39.685 10:58:37 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:39.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.685 --rc genhtml_branch_coverage=1 00:03:39.685 --rc genhtml_function_coverage=1 00:03:39.685 --rc genhtml_legend=1 00:03:39.685 --rc geninfo_all_blocks=1 00:03:39.685 --rc geninfo_unexecuted_blocks=1 00:03:39.685 00:03:39.685 ' 00:03:39.685 10:58:37 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:39.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.685 --rc genhtml_branch_coverage=1 00:03:39.685 --rc genhtml_function_coverage=1 00:03:39.685 --rc genhtml_legend=1 00:03:39.685 --rc geninfo_all_blocks=1 00:03:39.685 --rc geninfo_unexecuted_blocks=1 00:03:39.685 00:03:39.685 ' 00:03:39.685 10:58:37 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:39.685 10:58:37 -- nvmf/common.sh@7 -- # uname -s 00:03:39.685 10:58:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:39.685 10:58:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:39.685 10:58:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:39.685 10:58:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:39.685 10:58:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:39.685 10:58:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:39.685 10:58:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:39.685 10:58:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:39.685 10:58:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:39.685 10:58:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:39.685 10:58:37 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:03:39.685 10:58:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:03:39.685 10:58:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:39.685 10:58:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:39.685 10:58:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:39.685 10:58:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:39.685 10:58:37 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:39.685 10:58:37 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:39.685 10:58:37 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:39.685 10:58:37 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:39.685 10:58:37 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:39.685 10:58:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.685 10:58:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.685 10:58:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.685 10:58:37 -- paths/export.sh@5 -- # export PATH 00:03:39.685 10:58:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.685 10:58:37 -- nvmf/common.sh@51 -- # : 0 00:03:39.685 10:58:37 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:39.685 10:58:37 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:39.685 10:58:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:39.685 10:58:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:39.685 10:58:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:39.685 10:58:37 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:39.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:39.685 10:58:37 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:39.685 10:58:37 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:39.685 10:58:37 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:39.685 10:58:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:39.685 10:58:37 -- spdk/autotest.sh@32 -- # uname -s 00:03:39.685 10:58:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:39.685 10:58:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:39.685 10:58:37 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
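The lcov version gate traced a little earlier (autotest_common.sh calling lt/cmp_versions from scripts/common.sh) reduces to a field-by-field numeric comparison of the two version strings. A minimal stand-alone sketch of that logic follows; it is reconstructed from the trace, so the function name and the handling of non-numeric fields are assumptions rather than the exact SPDK helper:

#!/usr/bin/env bash
# Sketch of the version comparison seen in the scripts/common.sh trace:
# split both versions on '.', '-' and ':' and compare component by component.
cmp_versions_sketch() {
    local op=$2 v
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        # treat non-numeric components as 0 (assumption; the traced helper calls "decimal")
        [[ $d1 =~ ^[0-9]+$ ]] || d1=0
        [[ $d2 =~ ^[0-9]+$ ]] || d2=0
        ((d1 > d2)) && { [[ $op == '>' ]]; return; }
        ((d1 < d2)) && { [[ $op == '<' ]]; return; }
    done
    # all compared components are equal
    [[ $op == '<=' || $op == '>=' || $op == '==' ]]
}

# Mirrors the traced call "lt 1.15 2": it succeeds, so the lcov 1.x option set is exported.
cmp_versions_sketch 1.15 '<' 2 && echo "lcov is older than 2.x"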
00:03:39.685 10:58:37 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:39.685 10:58:37 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:39.685 10:58:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:39.685 10:58:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:39.685 10:58:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:39.685 10:58:37 -- spdk/autotest.sh@48 -- # udevadm_pid=1827917 00:03:39.685 10:58:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:39.685 10:58:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:39.685 10:58:37 -- pm/common@17 -- # local monitor 00:03:39.685 10:58:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.685 10:58:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.685 10:58:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.685 10:58:37 -- pm/common@21 -- # date +%s 00:03:39.685 10:58:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.685 10:58:37 -- pm/common@21 -- # date +%s 00:03:39.686 10:58:37 -- pm/common@21 -- # date +%s 00:03:39.686 10:58:37 -- pm/common@25 -- # sleep 1 00:03:39.686 10:58:37 -- pm/common@21 -- # date +%s 00:03:39.686 10:58:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728205117 00:03:39.686 10:58:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728205117 00:03:39.686 10:58:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728205117 00:03:39.686 10:58:37 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728205117 00:03:39.686 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728205117_collect-cpu-temp.pm.log 00:03:39.944 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728205117_collect-vmstat.pm.log 00:03:39.944 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728205117_collect-cpu-load.pm.log 00:03:39.944 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728205117_collect-bmc-pm.bmc.pm.log 00:03:40.884 10:58:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:40.884 10:58:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:40.884 10:58:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:40.884 10:58:38 -- common/autotest_common.sh@10 -- # set +x 00:03:40.884 10:58:38 -- spdk/autotest.sh@59 -- # create_test_list 00:03:40.884 10:58:38 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:40.884 10:58:38 -- common/autotest_common.sh@10 -- # set +x 00:03:40.884 10:58:38 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:40.884 10:58:38 
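start_monitor_resources, traced just above, launches one collector per resource (CPU load, vmstat, CPU temperature, BMC power) under the power output directory, and the stop_monitor_resources/signal_monitor_resources pass earlier in the run tears them down through per-collector pid files. A rough sketch of that teardown, assuming each collector writes <name>.pid into the power directory as the pid-file checks in the trace suggest:

# Sketch of signal_monitor_resources as suggested by the pm/common trace:
# each collector leaves <name>.pid under the power output dir; stop sends it a signal.
power_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power

signal_monitor_resources_sketch() {
    local signal=$1 monitor pidfile pid
    for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        pidfile=$power_dir/$monitor.pid
        [[ -e $pidfile ]] || continue          # collector may not have started on this rig
        pid=$(<"$pidfile")
        kill "-$signal" "$pid" 2>/dev/null || true
    done
}

signal_monitor_resources_sketch TERM           # same signal the traced teardown sends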
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:40.884 10:58:38 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:40.884 10:58:38 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:40.884 10:58:38 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:40.884 10:58:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:40.884 10:58:38 -- common/autotest_common.sh@1455 -- # uname 00:03:40.884 10:58:38 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:40.884 10:58:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:40.884 10:58:38 -- common/autotest_common.sh@1475 -- # uname 00:03:40.884 10:58:38 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:40.884 10:58:38 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:40.884 10:58:38 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:40.884 lcov: LCOV version 1.15 00:03:40.884 10:58:38 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:53.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:53.086 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:07.961 10:59:02 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:07.961 10:59:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:07.961 10:59:02 -- common/autotest_common.sh@10 -- # set +x 00:04:07.961 10:59:02 -- spdk/autotest.sh@78 -- # rm -f 00:04:07.961 10:59:02 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:07.961 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:07.961 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:07.961 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:08.221 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:08.221 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:08.221 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:08.221 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:08.221 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:08.221 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:08.221 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:08.221 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:08.221 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:08.221 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:08.221 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:08.221 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:08.479 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:08.479 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:08.479 10:59:05 -- 
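The coverage setup above exports LCOV_OPTS for lcov 1.15 and captures an initial (-i) "Baseline" snapshot of the tree into cov_base.info before any tests run. In the usual lcov workflow that baseline is later merged with the post-test capture so files that were never executed still show up with zero counts; a hedged sketch of that end-to-end flow follows (only the baseline capture appears in this part of the log, the later steps are assumptions):

# Assumed continuation of the coverage flow started above.
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
out=$src/../output

# 1. zero-coverage baseline before the tests (this step is in the log)
lcov $LCOV_OPTS -q -c --no-external -i -t Baseline -d "$src" -o "$out/cov_base.info"

# 2. after the test run, capture the actual counters (assumed step)
lcov $LCOV_OPTS -q -c --no-external -t Tests -d "$src" -o "$out/cov_test.info"

# 3. merge so untouched files keep 0% rather than disappearing (assumed step)
lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"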
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:08.479 10:59:05 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:08.479 10:59:05 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:08.479 10:59:05 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:08.479 10:59:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:08.479 10:59:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:08.479 10:59:05 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:08.479 10:59:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:08.479 10:59:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:08.479 10:59:05 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:08.479 10:59:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.479 10:59:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.479 10:59:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:08.479 10:59:05 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:08.479 10:59:05 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:08.479 No valid GPT data, bailing 00:04:08.479 10:59:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:08.479 10:59:05 -- scripts/common.sh@394 -- # pt= 00:04:08.479 10:59:05 -- scripts/common.sh@395 -- # return 1 00:04:08.479 10:59:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:08.479 1+0 records in 00:04:08.479 1+0 records out 00:04:08.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00177475 s, 591 MB/s 00:04:08.479 10:59:05 -- spdk/autotest.sh@105 -- # sync 00:04:08.479 10:59:05 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:08.479 10:59:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:08.479 10:59:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:13.745 10:59:10 -- spdk/autotest.sh@111 -- # uname -s 00:04:13.745 10:59:10 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:13.745 10:59:10 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:13.745 10:59:10 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:15.649 Hugepages 00:04:15.649 node hugesize free / total 00:04:15.649 node0 1048576kB 0 / 0 00:04:15.649 node0 2048kB 0 / 0 00:04:15.649 node1 1048576kB 0 / 0 00:04:15.907 node1 2048kB 0 / 0 00:04:15.907 00:04:15.907 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:15.907 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:15.907 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:15.907 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:15.907 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:15.907 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:15.907 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:15.907 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:15.908 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:15.908 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:15.908 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:15.908 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:15.908 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:15.908 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:15.908 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:15.908 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:15.908 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:15.908 I/OAT 0000:80:04.7 8086 
2021 1 ioatdma - - 00:04:15.908 10:59:13 -- spdk/autotest.sh@117 -- # uname -s 00:04:15.908 10:59:13 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:15.908 10:59:13 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:15.908 10:59:13 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:18.438 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:18.438 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:18.438 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:18.438 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:18.438 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:18.438 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:18.438 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:18.438 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:18.438 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:18.438 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:18.438 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:18.438 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:18.438 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:18.438 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:18.438 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:18.438 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:19.007 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:19.007 10:59:16 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:20.386 10:59:17 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:20.386 10:59:17 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:20.386 10:59:17 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:20.386 10:59:17 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:20.386 10:59:17 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:20.386 10:59:17 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:20.386 10:59:17 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:20.386 10:59:17 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:20.386 10:59:17 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:20.386 10:59:17 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:20.386 10:59:17 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:20.386 10:59:17 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:22.923 Waiting for block devices as requested 00:04:22.923 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:22.923 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:22.923 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:22.923 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:22.923 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:22.923 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:23.182 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:23.182 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:23.182 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:23.182 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:23.441 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:23.441 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:23.441 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:23.700 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:23.700 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:23.700 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:23.958 0000:80:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:04:23.958 10:59:21 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:23.958 10:59:21 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:23.958 10:59:21 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:23.958 10:59:21 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:04:23.958 10:59:21 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:23.958 10:59:21 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:23.958 10:59:21 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:23.958 10:59:21 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:23.958 10:59:21 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:23.958 10:59:21 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:23.958 10:59:21 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:23.958 10:59:21 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:23.958 10:59:21 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:23.958 10:59:21 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:04:23.958 10:59:21 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:23.958 10:59:21 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:23.958 10:59:21 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:23.958 10:59:21 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:23.958 10:59:21 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:23.958 10:59:21 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:23.958 10:59:21 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:23.958 10:59:21 -- common/autotest_common.sh@1541 -- # continue 00:04:23.958 10:59:21 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:23.958 10:59:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:23.958 10:59:21 -- common/autotest_common.sh@10 -- # set +x 00:04:23.958 10:59:21 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:23.958 10:59:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:23.958 10:59:21 -- common/autotest_common.sh@10 -- # set +x 00:04:23.958 10:59:21 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:27.243 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:27.243 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:27.243 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:27.243 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:27.243 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:27.243 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:27.243 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:27.243 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:27.243 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:27.243 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:27.243 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:27.243 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:27.243 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:27.243 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:27.243 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:27.243 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:27.502 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:27.759 10:59:25 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:27.760 10:59:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.760 10:59:25 -- common/autotest_common.sh@10 -- # set +x 00:04:27.760 10:59:25 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:27.760 10:59:25 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:27.760 10:59:25 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:27.760 10:59:25 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:27.760 10:59:25 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:27.760 10:59:25 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:27.760 10:59:25 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:27.760 10:59:25 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:27.760 10:59:25 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:27.760 10:59:25 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:27.760 10:59:25 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:27.760 10:59:25 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:27.760 10:59:25 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:27.760 10:59:25 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:27.760 10:59:25 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:27.760 10:59:25 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:27.760 10:59:25 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:27.760 10:59:25 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:27.760 10:59:25 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:27.760 10:59:25 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:27.760 10:59:25 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:27.760 10:59:25 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:04:27.760 10:59:25 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:04:27.760 10:59:25 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1841668 00:04:27.760 10:59:25 -- common/autotest_common.sh@1583 -- # waitforlisten 1841668 00:04:27.760 10:59:25 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.760 10:59:25 -- common/autotest_common.sh@831 -- # '[' -z 1841668 ']' 00:04:27.760 10:59:25 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.760 10:59:25 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:27.760 10:59:25 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.760 10:59:25 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:27.760 10:59:25 -- common/autotest_common.sh@10 -- # set +x 00:04:28.018 [2024-10-06 10:59:25.363595] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
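For reference, the get_nvme_bdfs / get_nvme_bdfs_by_id steps traced above amount to listing the NVMe controllers that gen_nvme.sh reports and keeping the ones whose PCI device ID matches 0x0a54; a minimal sketch of that idea, using the same sysfs path the trace shows (the loop itself is illustrative, not the real autotest_common.sh helper):

    target_id=0x0a54
    for bdf in $(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
        # e.g. cat /sys/bus/pci/devices/0000:5e:00.0/device -> 0x0a54
        [ "$(cat /sys/bus/pci/devices/$bdf/device)" = "$target_id" ] && echo "$bdf"
    done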
00:04:28.018 [2024-10-06 10:59:25.363645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1841668 ] 00:04:28.018 [2024-10-06 10:59:25.419474] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.018 [2024-10-06 10:59:25.458752] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.300 10:59:25 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:28.300 10:59:25 -- common/autotest_common.sh@864 -- # return 0 00:04:28.300 10:59:25 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:28.300 10:59:25 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:28.300 10:59:25 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:31.667 nvme0n1 00:04:31.667 10:59:28 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:31.667 [2024-10-06 10:59:28.823620] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:31.667 [2024-10-06 10:59:28.823651] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:31.667 request: 00:04:31.667 { 00:04:31.667 "nvme_ctrlr_name": "nvme0", 00:04:31.667 "password": "test", 00:04:31.667 "method": "bdev_nvme_opal_revert", 00:04:31.667 "req_id": 1 00:04:31.667 } 00:04:31.667 Got JSON-RPC error response 00:04:31.667 response: 00:04:31.667 { 00:04:31.667 "code": -32603, 00:04:31.667 "message": "Internal error" 00:04:31.667 } 00:04:31.667 10:59:28 -- common/autotest_common.sh@1589 -- # true 00:04:31.667 10:59:28 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:31.667 10:59:28 -- common/autotest_common.sh@1593 -- # killprocess 1841668 00:04:31.667 10:59:28 -- common/autotest_common.sh@950 -- # '[' -z 1841668 ']' 00:04:31.667 10:59:28 -- common/autotest_common.sh@954 -- # kill -0 1841668 00:04:31.667 10:59:28 -- common/autotest_common.sh@955 -- # uname 00:04:31.667 10:59:28 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:31.667 10:59:28 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1841668 00:04:31.667 10:59:28 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:31.667 10:59:28 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:31.667 10:59:28 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1841668' 00:04:31.667 killing process with pid 1841668 00:04:31.667 10:59:28 -- common/autotest_common.sh@969 -- # kill 1841668 00:04:31.667 10:59:28 -- common/autotest_common.sh@974 -- # wait 1841668 00:04:31.667 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:31.667 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:31.667 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:31.667 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:31.667 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:31.667 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:31.667 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:31.667 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:31.667 EAL: 
Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:31.667 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 [this warning is repeated many times, one line per 2 MB mapping being cleared; identical lines omitted] 00:04:31.668 EAL: Unexpected size 0 of DMA remapping cleared instead
of 2097152 00:04:31.668 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:31.668 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:31.668 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:31.668 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:31.668 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:31.668 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:31.668 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:31.668 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:31.668 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:31.668 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:33.049 10:59:30 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:33.049 10:59:30 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:33.049 10:59:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:33.049 10:59:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:33.049 10:59:30 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:33.049 10:59:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:33.049 10:59:30 -- common/autotest_common.sh@10 -- # set +x 00:04:33.049 10:59:30 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:33.049 10:59:30 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:33.049 10:59:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.049 10:59:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.049 10:59:30 -- common/autotest_common.sh@10 -- # set +x 00:04:33.049 ************************************ 00:04:33.049 START TEST env 00:04:33.049 ************************************ 00:04:33.049 10:59:30 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:33.309 * Looking for test storage... 00:04:33.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:33.309 10:59:30 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:33.309 10:59:30 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:33.309 10:59:30 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:33.309 10:59:30 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:33.309 10:59:30 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.309 10:59:30 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.309 10:59:30 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.309 10:59:30 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.309 10:59:30 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.309 10:59:30 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.309 10:59:30 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.309 10:59:30 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.309 10:59:30 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.309 10:59:30 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.309 10:59:30 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.309 10:59:30 env -- scripts/common.sh@344 -- # case "$op" in 00:04:33.309 10:59:30 env -- scripts/common.sh@345 -- # : 1 00:04:33.309 10:59:30 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.309 10:59:30 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.309 10:59:30 env -- scripts/common.sh@365 -- # decimal 1 00:04:33.309 10:59:30 env -- scripts/common.sh@353 -- # local d=1 00:04:33.309 10:59:30 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.309 10:59:30 env -- scripts/common.sh@355 -- # echo 1 00:04:33.309 10:59:30 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.309 10:59:30 env -- scripts/common.sh@366 -- # decimal 2 00:04:33.309 10:59:30 env -- scripts/common.sh@353 -- # local d=2 00:04:33.309 10:59:30 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.309 10:59:30 env -- scripts/common.sh@355 -- # echo 2 00:04:33.309 10:59:30 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.309 10:59:30 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.309 10:59:30 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.309 10:59:30 env -- scripts/common.sh@368 -- # return 0 00:04:33.309 10:59:30 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.309 10:59:30 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:33.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.309 --rc genhtml_branch_coverage=1 00:04:33.309 --rc genhtml_function_coverage=1 00:04:33.309 --rc genhtml_legend=1 00:04:33.309 --rc geninfo_all_blocks=1 00:04:33.309 --rc geninfo_unexecuted_blocks=1 00:04:33.309 00:04:33.309 ' 00:04:33.309 10:59:30 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:33.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.309 --rc genhtml_branch_coverage=1 00:04:33.309 --rc genhtml_function_coverage=1 00:04:33.309 --rc genhtml_legend=1 00:04:33.309 --rc geninfo_all_blocks=1 00:04:33.309 --rc geninfo_unexecuted_blocks=1 00:04:33.309 00:04:33.309 ' 00:04:33.309 10:59:30 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:33.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.309 --rc genhtml_branch_coverage=1 00:04:33.309 --rc genhtml_function_coverage=1 00:04:33.309 --rc genhtml_legend=1 00:04:33.309 --rc geninfo_all_blocks=1 00:04:33.309 --rc geninfo_unexecuted_blocks=1 00:04:33.309 00:04:33.309 ' 00:04:33.309 10:59:30 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:33.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.309 --rc genhtml_branch_coverage=1 00:04:33.309 --rc genhtml_function_coverage=1 00:04:33.309 --rc genhtml_legend=1 00:04:33.309 --rc geninfo_all_blocks=1 00:04:33.309 --rc geninfo_unexecuted_blocks=1 00:04:33.309 00:04:33.309 ' 00:04:33.309 10:59:30 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:33.309 10:59:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.309 10:59:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.309 10:59:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.309 ************************************ 00:04:33.309 START TEST env_memory 00:04:33.309 ************************************ 00:04:33.309 10:59:30 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:33.309 00:04:33.309 00:04:33.309 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.309 http://cunit.sourceforge.net/ 00:04:33.309 00:04:33.309 00:04:33.309 Suite: memory 00:04:33.309 Test: alloc and free memory map ...[2024-10-06 10:59:30.808698] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:33.309 passed 00:04:33.309 Test: mem map translation ...[2024-10-06 10:59:30.826397] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:33.309 [2024-10-06 10:59:30.826410] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:33.309 [2024-10-06 10:59:30.826443] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:33.309 [2024-10-06 10:59:30.826449] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:33.309 passed 00:04:33.309 Test: mem map registration ...[2024-10-06 10:59:30.862650] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:33.309 [2024-10-06 10:59:30.862664] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:33.309 passed 00:04:33.570 Test: mem map adjacent registrations ...passed 00:04:33.570 00:04:33.570 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.570 suites 1 1 n/a 0 0 00:04:33.570 tests 4 4 4 0 0 00:04:33.570 asserts 152 152 152 0 n/a 00:04:33.570 00:04:33.570 Elapsed time = 0.137 seconds 00:04:33.570 00:04:33.570 real 0m0.150s 00:04:33.570 user 0m0.141s 00:04:33.570 sys 0m0.009s 00:04:33.570 10:59:30 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.570 10:59:30 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:33.570 ************************************ 00:04:33.570 END TEST env_memory 00:04:33.570 ************************************ 00:04:33.570 10:59:30 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:33.570 10:59:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.570 10:59:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.570 10:59:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.570 ************************************ 00:04:33.570 START TEST env_vtophys 00:04:33.570 ************************************ 00:04:33.570 10:59:30 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:33.570 EAL: lib.eal log level changed from notice to debug 00:04:33.570 EAL: Detected lcore 0 as core 0 on socket 0 00:04:33.570 EAL: Detected lcore 1 as core 1 on socket 0 00:04:33.570 EAL: Detected lcore 2 as core 2 on socket 0 00:04:33.570 EAL: Detected lcore 3 as core 3 on socket 0 00:04:33.570 EAL: Detected lcore 4 as core 4 on socket 0 00:04:33.570 EAL: Detected lcore 5 as core 5 on socket 0 00:04:33.570 EAL: Detected lcore 6 as core 6 on socket 0 00:04:33.570 EAL: Detected lcore 7 as core 8 on socket 0 00:04:33.570 EAL: Detected lcore 8 as core 9 on socket 0 00:04:33.570 EAL: Detected lcore 9 as core 10 on socket 0 00:04:33.570 EAL: Detected lcore 10 as 
core 11 on socket 0 00:04:33.570 EAL: Detected lcore 11 as core 12 on socket 0 00:04:33.570 EAL: Detected lcore 12 as core 13 on socket 0 00:04:33.570 EAL: Detected lcore 13 as core 16 on socket 0 00:04:33.570 EAL: Detected lcore 14 as core 17 on socket 0 00:04:33.570 EAL: Detected lcore 15 as core 18 on socket 0 00:04:33.570 EAL: Detected lcore 16 as core 19 on socket 0 00:04:33.570 EAL: Detected lcore 17 as core 20 on socket 0 00:04:33.570 EAL: Detected lcore 18 as core 21 on socket 0 00:04:33.570 EAL: Detected lcore 19 as core 25 on socket 0 00:04:33.570 EAL: Detected lcore 20 as core 26 on socket 0 00:04:33.570 EAL: Detected lcore 21 as core 27 on socket 0 00:04:33.570 EAL: Detected lcore 22 as core 28 on socket 0 00:04:33.570 EAL: Detected lcore 23 as core 29 on socket 0 00:04:33.570 EAL: Detected lcore 24 as core 0 on socket 1 00:04:33.570 EAL: Detected lcore 25 as core 1 on socket 1 00:04:33.570 EAL: Detected lcore 26 as core 2 on socket 1 00:04:33.570 EAL: Detected lcore 27 as core 3 on socket 1 00:04:33.570 EAL: Detected lcore 28 as core 4 on socket 1 00:04:33.570 EAL: Detected lcore 29 as core 5 on socket 1 00:04:33.570 EAL: Detected lcore 30 as core 6 on socket 1 00:04:33.570 EAL: Detected lcore 31 as core 8 on socket 1 00:04:33.570 EAL: Detected lcore 32 as core 9 on socket 1 00:04:33.570 EAL: Detected lcore 33 as core 10 on socket 1 00:04:33.570 EAL: Detected lcore 34 as core 11 on socket 1 00:04:33.570 EAL: Detected lcore 35 as core 12 on socket 1 00:04:33.570 EAL: Detected lcore 36 as core 13 on socket 1 00:04:33.570 EAL: Detected lcore 37 as core 16 on socket 1 00:04:33.570 EAL: Detected lcore 38 as core 17 on socket 1 00:04:33.570 EAL: Detected lcore 39 as core 18 on socket 1 00:04:33.570 EAL: Detected lcore 40 as core 19 on socket 1 00:04:33.570 EAL: Detected lcore 41 as core 20 on socket 1 00:04:33.570 EAL: Detected lcore 42 as core 21 on socket 1 00:04:33.570 EAL: Detected lcore 43 as core 25 on socket 1 00:04:33.570 EAL: Detected lcore 44 as core 26 on socket 1 00:04:33.570 EAL: Detected lcore 45 as core 27 on socket 1 00:04:33.570 EAL: Detected lcore 46 as core 28 on socket 1 00:04:33.570 EAL: Detected lcore 47 as core 29 on socket 1 00:04:33.570 EAL: Detected lcore 48 as core 0 on socket 0 00:04:33.570 EAL: Detected lcore 49 as core 1 on socket 0 00:04:33.570 EAL: Detected lcore 50 as core 2 on socket 0 00:04:33.570 EAL: Detected lcore 51 as core 3 on socket 0 00:04:33.570 EAL: Detected lcore 52 as core 4 on socket 0 00:04:33.570 EAL: Detected lcore 53 as core 5 on socket 0 00:04:33.570 EAL: Detected lcore 54 as core 6 on socket 0 00:04:33.570 EAL: Detected lcore 55 as core 8 on socket 0 00:04:33.570 EAL: Detected lcore 56 as core 9 on socket 0 00:04:33.570 EAL: Detected lcore 57 as core 10 on socket 0 00:04:33.570 EAL: Detected lcore 58 as core 11 on socket 0 00:04:33.570 EAL: Detected lcore 59 as core 12 on socket 0 00:04:33.570 EAL: Detected lcore 60 as core 13 on socket 0 00:04:33.570 EAL: Detected lcore 61 as core 16 on socket 0 00:04:33.570 EAL: Detected lcore 62 as core 17 on socket 0 00:04:33.570 EAL: Detected lcore 63 as core 18 on socket 0 00:04:33.570 EAL: Detected lcore 64 as core 19 on socket 0 00:04:33.570 EAL: Detected lcore 65 as core 20 on socket 0 00:04:33.570 EAL: Detected lcore 66 as core 21 on socket 0 00:04:33.570 EAL: Detected lcore 67 as core 25 on socket 0 00:04:33.570 EAL: Detected lcore 68 as core 26 on socket 0 00:04:33.570 EAL: Detected lcore 69 as core 27 on socket 0 00:04:33.570 EAL: Detected lcore 70 as core 28 on socket 0 00:04:33.570 
EAL: Detected lcore 71 as core 29 on socket 0 00:04:33.570 EAL: Detected lcore 72 as core 0 on socket 1 00:04:33.570 EAL: Detected lcore 73 as core 1 on socket 1 00:04:33.570 EAL: Detected lcore 74 as core 2 on socket 1 00:04:33.570 EAL: Detected lcore 75 as core 3 on socket 1 00:04:33.570 EAL: Detected lcore 76 as core 4 on socket 1 00:04:33.570 EAL: Detected lcore 77 as core 5 on socket 1 00:04:33.570 EAL: Detected lcore 78 as core 6 on socket 1 00:04:33.570 EAL: Detected lcore 79 as core 8 on socket 1 00:04:33.570 EAL: Detected lcore 80 as core 9 on socket 1 00:04:33.570 EAL: Detected lcore 81 as core 10 on socket 1 00:04:33.570 EAL: Detected lcore 82 as core 11 on socket 1 00:04:33.570 EAL: Detected lcore 83 as core 12 on socket 1 00:04:33.570 EAL: Detected lcore 84 as core 13 on socket 1 00:04:33.570 EAL: Detected lcore 85 as core 16 on socket 1 00:04:33.570 EAL: Detected lcore 86 as core 17 on socket 1 00:04:33.570 EAL: Detected lcore 87 as core 18 on socket 1 00:04:33.570 EAL: Detected lcore 88 as core 19 on socket 1 00:04:33.570 EAL: Detected lcore 89 as core 20 on socket 1 00:04:33.570 EAL: Detected lcore 90 as core 21 on socket 1 00:04:33.570 EAL: Detected lcore 91 as core 25 on socket 1 00:04:33.570 EAL: Detected lcore 92 as core 26 on socket 1 00:04:33.570 EAL: Detected lcore 93 as core 27 on socket 1 00:04:33.570 EAL: Detected lcore 94 as core 28 on socket 1 00:04:33.570 EAL: Detected lcore 95 as core 29 on socket 1 00:04:33.570 EAL: Maximum logical cores by configuration: 128 00:04:33.570 EAL: Detected CPU lcores: 96 00:04:33.570 EAL: Detected NUMA nodes: 2 00:04:33.570 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:04:33.570 EAL: Detected shared linkage of DPDK 00:04:33.570 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:04:33.570 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:04:33.570 EAL: Registered [vdev] bus. 00:04:33.570 EAL: bus.vdev log level changed from disabled to notice 00:04:33.570 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:04:33.570 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:04:33.570 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:33.570 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:33.570 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:04:33.570 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:04:33.570 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:04:33.570 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:04:33.570 EAL: No shared files mode enabled, IPC will be disabled 00:04:33.570 EAL: No shared files mode enabled, IPC is disabled 00:04:33.570 EAL: Bus pci wants IOVA as 'DC' 00:04:33.570 EAL: Bus vdev wants IOVA as 'DC' 00:04:33.570 EAL: Buses did not request a specific IOVA mode. 00:04:33.570 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:33.570 EAL: Selected IOVA mode 'VA' 00:04:33.570 EAL: Probing VFIO support... 
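The VFIO probe that follows succeeds, and IOVA mode 'VA' gets selected, only because an IOMMU is active on this host; a rough shell check of the same precondition (standard kernel paths, not SPDK code):

    if compgen -G '/sys/class/iommu/*' > /dev/null && ls /dev/vfio/ 2>/dev/null | grep -qv '^vfio$'; then
        echo "IOMMU and VFIO groups present -> EAL can select IOVA as 'VA'"
    else
        echo "no usable IOMMU -> EAL would fall back to physical addresses (IOVA as 'PA')"
    fi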
00:04:33.570 EAL: IOMMU type 1 (Type 1) is supported 00:04:33.570 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:33.570 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:33.570 EAL: VFIO support initialized 00:04:33.570 EAL: Ask a virtual area of 0x2e000 bytes 00:04:33.570 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:33.570 EAL: Setting up physically contiguous memory... 00:04:33.570 EAL: Setting maximum number of open files to 524288 00:04:33.570 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:33.570 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:33.570 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:33.570 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.570 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:33.570 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.570 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.570 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:33.570 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:33.570 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.570 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:33.570 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.570 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.570 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:33.570 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:33.570 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.570 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:33.571 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.571 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.571 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:33.571 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:33.571 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.571 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:33.571 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.571 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.571 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:33.571 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:33.571 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:33.571 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.571 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:33.571 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:33.571 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.571 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:33.571 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:33.571 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.571 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:33.571 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:33.571 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.571 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:33.571 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:33.571 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.571 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:33.571 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:33.571 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.571 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:33.571 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:33.571 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.571 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:33.571 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:33.571 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.571 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:33.571 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:33.571 EAL: Hugepages will be freed exactly as allocated. 00:04:33.571 EAL: No shared files mode enabled, IPC is disabled 00:04:33.571 EAL: No shared files mode enabled, IPC is disabled 00:04:33.571 EAL: TSC frequency is ~2100000 KHz 00:04:33.571 EAL: Main lcore 0 is ready (tid=7f37e2750a00;cpuset=[0]) 00:04:33.571 EAL: Trying to obtain current memory policy. 00:04:33.571 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.571 EAL: Restoring previous memory policy: 0 00:04:33.571 EAL: request: mp_malloc_sync 00:04:33.571 EAL: No shared files mode enabled, IPC is disabled 00:04:33.571 EAL: Heap on socket 0 was expanded by 2MB 00:04:33.571 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:04:33.571 EAL: probe driver: 8086:37d2 net_i40e 00:04:33.571 EAL: Not managed by a supported kernel driver, skipped 00:04:33.571 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:04:33.571 EAL: probe driver: 8086:37d2 net_i40e 00:04:33.571 EAL: Not managed by a supported kernel driver, skipped 00:04:33.571 EAL: No shared files mode enabled, IPC is disabled 00:04:33.571 EAL: No shared files mode enabled, IPC is disabled 00:04:33.571 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:33.571 EAL: Mem event callback 'spdk:(nil)' registered 00:04:33.571 00:04:33.571 00:04:33.571 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.571 http://cunit.sourceforge.net/ 00:04:33.571 00:04:33.571 00:04:33.571 Suite: components_suite 00:04:33.571 Test: vtophys_malloc_test ...passed 00:04:33.571 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:33.571 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.571 EAL: Restoring previous memory policy: 4 00:04:33.571 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.571 EAL: request: mp_malloc_sync 00:04:33.571 EAL: No shared files mode enabled, IPC is disabled 00:04:33.571 EAL: Heap on socket 0 was expanded by 4MB 00:04:33.571 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.571 EAL: request: mp_malloc_sync 00:04:33.571 EAL: No shared files mode enabled, IPC is disabled 00:04:33.571 EAL: Heap on socket 0 was shrunk by 4MB 00:04:33.571 EAL: Trying to obtain current memory policy. 00:04:33.571 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.571 EAL: Restoring previous memory policy: 4 00:04:33.571 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.571 EAL: request: mp_malloc_sync 00:04:33.571 EAL: No shared files mode enabled, IPC is disabled 00:04:33.571 EAL: Heap on socket 0 was expanded by 6MB 00:04:33.571 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.571 EAL: request: mp_malloc_sync 00:04:33.571 EAL: No shared files mode enabled, IPC is disabled 00:04:33.571 EAL: Heap on socket 0 was shrunk by 6MB 00:04:33.571 EAL: Trying to obtain current memory policy. 
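Each memseg list reserved in the EAL output above spans n_segs * hugepage_sz of virtual address space, which is where the repeated 0x400000000 reservations come from; the arithmetic, for reference:

    printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # 8192 segments x 2 MiB hugepages = 16 GiB = 0x400000000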
00:04:33.571 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.571 EAL: Restoring previous memory policy: 4 00:04:33.571 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.571 EAL: request: mp_malloc_sync 00:04:33.571 EAL: No shared files mode enabled, IPC is disabled 00:04:33.571 EAL: Heap on socket 0 was expanded by 10MB 00:04:33.571 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.571 EAL: request: mp_malloc_sync 00:04:33.571 EAL: No shared files mode enabled, IPC is disabled 00:04:33.571 EAL: Heap on socket 0 was shrunk by 10MB 00:04:33.571 EAL: Trying to obtain current memory policy. 00:04:33.571 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.571 EAL: Restoring previous memory policy: 4 00:04:33.571 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.571 EAL: request: mp_malloc_sync 00:04:33.571 EAL: No shared files mode enabled, IPC is disabled 00:04:33.571 EAL: Heap on socket 0 was expanded by 18MB 00:04:33.571 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.571 EAL: request: mp_malloc_sync 00:04:33.571 EAL: No shared files mode enabled, IPC is disabled 00:04:33.571 EAL: Heap on socket 0 was shrunk by 18MB 00:04:33.571 EAL: Trying to obtain current memory policy. 00:04:33.571 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.571 EAL: Restoring previous memory policy: 4 00:04:33.571 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.571 EAL: request: mp_malloc_sync 00:04:33.571 EAL: No shared files mode enabled, IPC is disabled 00:04:33.571 EAL: Heap on socket 0 was expanded by 34MB 00:04:33.571 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.571 EAL: request: mp_malloc_sync 00:04:33.571 EAL: No shared files mode enabled, IPC is disabled 00:04:33.571 EAL: Heap on socket 0 was shrunk by 34MB 00:04:33.571 EAL: Trying to obtain current memory policy. 00:04:33.571 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.571 EAL: Restoring previous memory policy: 4 00:04:33.571 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.571 EAL: request: mp_malloc_sync 00:04:33.571 EAL: No shared files mode enabled, IPC is disabled 00:04:33.571 EAL: Heap on socket 0 was expanded by 66MB 00:04:33.571 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.571 EAL: request: mp_malloc_sync 00:04:33.571 EAL: No shared files mode enabled, IPC is disabled 00:04:33.571 EAL: Heap on socket 0 was shrunk by 66MB 00:04:33.571 EAL: Trying to obtain current memory policy. 00:04:33.571 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.571 EAL: Restoring previous memory policy: 4 00:04:33.571 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.571 EAL: request: mp_malloc_sync 00:04:33.571 EAL: No shared files mode enabled, IPC is disabled 00:04:33.571 EAL: Heap on socket 0 was expanded by 130MB 00:04:33.571 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.835 EAL: request: mp_malloc_sync 00:04:33.835 EAL: No shared files mode enabled, IPC is disabled 00:04:33.835 EAL: Heap on socket 0 was shrunk by 130MB 00:04:33.835 EAL: Trying to obtain current memory policy. 
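The expand/shrink pairs reported through the 'spdk:(nil)' mem event callback correspond to DPDK taking and returning 2 MB hugepages on demand during vtophys_spdk_malloc_test; one way to observe the same effect from outside the test while it runs (standard kernel sysfs path, interval arbitrary):

    watch -n 0.5 'cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages'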
00:04:33.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.835 EAL: Restoring previous memory policy: 4 00:04:33.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.835 EAL: request: mp_malloc_sync 00:04:33.835 EAL: No shared files mode enabled, IPC is disabled 00:04:33.835 EAL: Heap on socket 0 was expanded by 258MB 00:04:33.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.835 EAL: request: mp_malloc_sync 00:04:33.835 EAL: No shared files mode enabled, IPC is disabled 00:04:33.835 EAL: Heap on socket 0 was shrunk by 258MB 00:04:33.835 EAL: Trying to obtain current memory policy. 00:04:33.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.835 EAL: Restoring previous memory policy: 4 00:04:33.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.835 EAL: request: mp_malloc_sync 00:04:33.835 EAL: No shared files mode enabled, IPC is disabled 00:04:33.835 EAL: Heap on socket 0 was expanded by 514MB 00:04:34.097 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.097 EAL: request: mp_malloc_sync 00:04:34.097 EAL: No shared files mode enabled, IPC is disabled 00:04:34.097 EAL: Heap on socket 0 was shrunk by 514MB 00:04:34.097 EAL: Trying to obtain current memory policy. 00:04:34.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.356 EAL: Restoring previous memory policy: 4 00:04:34.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.356 EAL: request: mp_malloc_sync 00:04:34.356 EAL: No shared files mode enabled, IPC is disabled 00:04:34.356 EAL: Heap on socket 0 was expanded by 1026MB 00:04:34.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.616 EAL: request: mp_malloc_sync 00:04:34.616 EAL: No shared files mode enabled, IPC is disabled 00:04:34.616 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:34.616 passed 00:04:34.616 00:04:34.616 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.616 suites 1 1 n/a 0 0 00:04:34.616 tests 2 2 2 0 0 00:04:34.616 asserts 497 497 497 0 n/a 00:04:34.616 00:04:34.616 Elapsed time = 0.961 seconds 00:04:34.616 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.616 EAL: request: mp_malloc_sync 00:04:34.616 EAL: No shared files mode enabled, IPC is disabled 00:04:34.616 EAL: Heap on socket 0 was shrunk by 2MB 00:04:34.616 EAL: No shared files mode enabled, IPC is disabled 00:04:34.616 EAL: No shared files mode enabled, IPC is disabled 00:04:34.616 EAL: No shared files mode enabled, IPC is disabled 00:04:34.616 00:04:34.616 real 0m1.068s 00:04:34.616 user 0m0.631s 00:04:34.616 sys 0m0.409s 00:04:34.616 10:59:32 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.616 10:59:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:34.616 ************************************ 00:04:34.616 END TEST env_vtophys 00:04:34.616 ************************************ 00:04:34.616 10:59:32 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:34.616 10:59:32 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.616 10:59:32 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.616 10:59:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.616 ************************************ 00:04:34.616 START TEST env_pci 00:04:34.616 ************************************ 00:04:34.616 10:59:32 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:34.616 00:04:34.616 00:04:34.616 CUnit - A unit testing 
framework for C - Version 2.1-3 00:04:34.616 http://cunit.sourceforge.net/ 00:04:34.616 00:04:34.616 00:04:34.616 Suite: pci 00:04:34.616 Test: pci_hook ...[2024-10-06 10:59:32.122136] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1842935 has claimed it 00:04:34.616 EAL: Cannot find device (10000:00:01.0) 00:04:34.616 EAL: Failed to attach device on primary process 00:04:34.616 passed 00:04:34.616 00:04:34.616 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.616 suites 1 1 n/a 0 0 00:04:34.616 tests 1 1 1 0 0 00:04:34.616 asserts 25 25 25 0 n/a 00:04:34.616 00:04:34.616 Elapsed time = 0.026 seconds 00:04:34.616 00:04:34.616 real 0m0.043s 00:04:34.616 user 0m0.016s 00:04:34.616 sys 0m0.027s 00:04:34.616 10:59:32 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.616 10:59:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:34.616 ************************************ 00:04:34.616 END TEST env_pci 00:04:34.616 ************************************ 00:04:34.616 10:59:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:34.616 10:59:32 env -- env/env.sh@15 -- # uname 00:04:34.616 10:59:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:34.616 10:59:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:34.616 10:59:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:34.616 10:59:32 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:34.616 10:59:32 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.616 10:59:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.876 ************************************ 00:04:34.876 START TEST env_dpdk_post_init 00:04:34.876 ************************************ 00:04:34.876 10:59:32 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:34.876 EAL: Detected CPU lcores: 96 00:04:34.876 EAL: Detected NUMA nodes: 2 00:04:34.876 EAL: Detected shared linkage of DPDK 00:04:34.876 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:34.876 EAL: Selected IOVA mode 'VA' 00:04:34.876 EAL: VFIO support initialized 00:04:34.876 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:34.876 EAL: Using IOMMU type 1 (Type 1) 00:04:34.876 EAL: Ignore mapping IO port bar(1) 00:04:34.876 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:34.876 EAL: Ignore mapping IO port bar(1) 00:04:34.876 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:34.876 EAL: Ignore mapping IO port bar(1) 00:04:34.876 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:34.876 EAL: Ignore mapping IO port bar(1) 00:04:34.876 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:34.876 EAL: Ignore mapping IO port bar(1) 00:04:34.876 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:34.876 EAL: Ignore mapping IO port bar(1) 00:04:34.876 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:34.876 EAL: Ignore mapping IO port bar(1) 00:04:34.876 EAL: Probe PCI driver: spdk_ioat 
(8086:2021) device: 0000:00:04.6 (socket 0) 00:04:34.876 EAL: Ignore mapping IO port bar(1) 00:04:34.876 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:35.815 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:35.815 EAL: Ignore mapping IO port bar(1) 00:04:35.815 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:35.815 EAL: Ignore mapping IO port bar(1) 00:04:35.815 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:35.815 EAL: Ignore mapping IO port bar(1) 00:04:35.815 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:35.815 EAL: Ignore mapping IO port bar(1) 00:04:35.815 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:35.815 EAL: Ignore mapping IO port bar(1) 00:04:35.815 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:35.815 EAL: Ignore mapping IO port bar(1) 00:04:35.815 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:35.815 EAL: Ignore mapping IO port bar(1) 00:04:35.815 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:35.815 EAL: Ignore mapping IO port bar(1) 00:04:35.815 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:39.108 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:39.108 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:39.108 Starting DPDK initialization... 00:04:39.108 Starting SPDK post initialization... 00:04:39.108 SPDK NVMe probe 00:04:39.108 Attaching to 0000:5e:00.0 00:04:39.108 Attached to 0000:5e:00.0 00:04:39.108 Cleaning up... 00:04:39.108 00:04:39.108 real 0m4.288s 00:04:39.108 user 0m3.226s 00:04:39.108 sys 0m0.132s 00:04:39.108 10:59:36 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.108 10:59:36 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:39.108 ************************************ 00:04:39.108 END TEST env_dpdk_post_init 00:04:39.108 ************************************ 00:04:39.108 10:59:36 env -- env/env.sh@26 -- # uname 00:04:39.108 10:59:36 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:39.108 10:59:36 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:39.108 10:59:36 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.108 10:59:36 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.108 10:59:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.108 ************************************ 00:04:39.108 START TEST env_mem_callbacks 00:04:39.108 ************************************ 00:04:39.108 10:59:36 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:39.108 EAL: Detected CPU lcores: 96 00:04:39.108 EAL: Detected NUMA nodes: 2 00:04:39.108 EAL: Detected shared linkage of DPDK 00:04:39.108 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:39.108 EAL: Selected IOVA mode 'VA' 00:04:39.108 EAL: VFIO support initialized 00:04:39.108 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:39.108 00:04:39.108 00:04:39.108 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.108 http://cunit.sourceforge.net/ 00:04:39.108 00:04:39.108 00:04:39.108 Suite: memory 
00:04:39.108 Test: test ... 00:04:39.108 register 0x200000200000 2097152 00:04:39.108 malloc 3145728 00:04:39.108 register 0x200000400000 4194304 00:04:39.108 buf 0x200000500000 len 3145728 PASSED 00:04:39.108 malloc 64 00:04:39.108 buf 0x2000004fff40 len 64 PASSED 00:04:39.108 malloc 4194304 00:04:39.108 register 0x200000800000 6291456 00:04:39.108 buf 0x200000a00000 len 4194304 PASSED 00:04:39.108 free 0x200000500000 3145728 00:04:39.108 free 0x2000004fff40 64 00:04:39.108 unregister 0x200000400000 4194304 PASSED 00:04:39.108 free 0x200000a00000 4194304 00:04:39.108 unregister 0x200000800000 6291456 PASSED 00:04:39.108 malloc 8388608 00:04:39.108 register 0x200000400000 10485760 00:04:39.108 buf 0x200000600000 len 8388608 PASSED 00:04:39.108 free 0x200000600000 8388608 00:04:39.108 unregister 0x200000400000 10485760 PASSED 00:04:39.108 passed 00:04:39.108 00:04:39.108 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.109 suites 1 1 n/a 0 0 00:04:39.109 tests 1 1 1 0 0 00:04:39.109 asserts 15 15 15 0 n/a 00:04:39.109 00:04:39.109 Elapsed time = 0.005 seconds 00:04:39.109 00:04:39.109 real 0m0.047s 00:04:39.109 user 0m0.010s 00:04:39.109 sys 0m0.036s 00:04:39.109 10:59:36 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.109 10:59:36 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:39.109 ************************************ 00:04:39.109 END TEST env_mem_callbacks 00:04:39.109 ************************************ 00:04:39.109 00:04:39.109 real 0m6.093s 00:04:39.109 user 0m4.253s 00:04:39.109 sys 0m0.916s 00:04:39.109 10:59:36 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.109 10:59:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.109 ************************************ 00:04:39.109 END TEST env 00:04:39.109 ************************************ 00:04:39.369 10:59:36 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:39.369 10:59:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.369 10:59:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.369 10:59:36 -- common/autotest_common.sh@10 -- # set +x 00:04:39.369 ************************************ 00:04:39.369 START TEST rpc 00:04:39.369 ************************************ 00:04:39.369 10:59:36 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:39.369 * Looking for test storage... 
00:04:39.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:39.369 10:59:36 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:39.369 10:59:36 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:39.369 10:59:36 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:39.369 10:59:36 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:39.369 10:59:36 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.369 10:59:36 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.369 10:59:36 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.369 10:59:36 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.369 10:59:36 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.369 10:59:36 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.369 10:59:36 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.369 10:59:36 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.369 10:59:36 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.369 10:59:36 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.369 10:59:36 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.369 10:59:36 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:39.369 10:59:36 rpc -- scripts/common.sh@345 -- # : 1 00:04:39.369 10:59:36 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.369 10:59:36 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.369 10:59:36 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:39.369 10:59:36 rpc -- scripts/common.sh@353 -- # local d=1 00:04:39.369 10:59:36 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.369 10:59:36 rpc -- scripts/common.sh@355 -- # echo 1 00:04:39.369 10:59:36 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.369 10:59:36 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:39.369 10:59:36 rpc -- scripts/common.sh@353 -- # local d=2 00:04:39.369 10:59:36 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.369 10:59:36 rpc -- scripts/common.sh@355 -- # echo 2 00:04:39.369 10:59:36 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.369 10:59:36 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.369 10:59:36 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.369 10:59:36 rpc -- scripts/common.sh@368 -- # return 0 00:04:39.369 10:59:36 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.369 10:59:36 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:39.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.369 --rc genhtml_branch_coverage=1 00:04:39.369 --rc genhtml_function_coverage=1 00:04:39.369 --rc genhtml_legend=1 00:04:39.369 --rc geninfo_all_blocks=1 00:04:39.369 --rc geninfo_unexecuted_blocks=1 00:04:39.369 00:04:39.369 ' 00:04:39.369 10:59:36 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:39.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.369 --rc genhtml_branch_coverage=1 00:04:39.369 --rc genhtml_function_coverage=1 00:04:39.369 --rc genhtml_legend=1 00:04:39.369 --rc geninfo_all_blocks=1 00:04:39.369 --rc geninfo_unexecuted_blocks=1 00:04:39.369 00:04:39.369 ' 00:04:39.369 10:59:36 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:39.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.369 --rc genhtml_branch_coverage=1 00:04:39.369 --rc genhtml_function_coverage=1 
00:04:39.369 --rc genhtml_legend=1 00:04:39.369 --rc geninfo_all_blocks=1 00:04:39.369 --rc geninfo_unexecuted_blocks=1 00:04:39.369 00:04:39.369 ' 00:04:39.369 10:59:36 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:39.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.369 --rc genhtml_branch_coverage=1 00:04:39.369 --rc genhtml_function_coverage=1 00:04:39.369 --rc genhtml_legend=1 00:04:39.369 --rc geninfo_all_blocks=1 00:04:39.369 --rc geninfo_unexecuted_blocks=1 00:04:39.369 00:04:39.369 ' 00:04:39.369 10:59:36 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1843743 00:04:39.369 10:59:36 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.369 10:59:36 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1843743 00:04:39.369 10:59:36 rpc -- common/autotest_common.sh@831 -- # '[' -z 1843743 ']' 00:04:39.369 10:59:36 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.369 10:59:36 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:39.369 10:59:36 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:39.369 10:59:36 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.369 10:59:36 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:39.369 10:59:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.369 [2024-10-06 10:59:36.930806] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:04:39.369 [2024-10-06 10:59:36.930850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1843743 ] 00:04:39.629 [2024-10-06 10:59:36.986685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.629 [2024-10-06 10:59:37.026726] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:39.629 [2024-10-06 10:59:37.026767] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1843743' to capture a snapshot of events at runtime. 00:04:39.629 [2024-10-06 10:59:37.026775] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:39.629 [2024-10-06 10:59:37.026781] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:39.629 [2024-10-06 10:59:37.026788] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1843743 for offline analysis/debug. 
00:04:39.629 [2024-10-06 10:59:37.027293] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.888 10:59:37 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:39.888 10:59:37 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:39.888 10:59:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:39.888 10:59:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:39.888 10:59:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:39.888 10:59:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:39.888 10:59:37 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.888 10:59:37 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.888 10:59:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.889 ************************************ 00:04:39.889 START TEST rpc_integrity 00:04:39.889 ************************************ 00:04:39.889 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:39.889 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.889 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.889 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.889 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.889 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.889 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.889 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.889 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.889 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.889 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.889 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.889 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:39.889 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.889 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.889 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.889 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.889 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.889 { 00:04:39.889 "name": "Malloc0", 00:04:39.889 "aliases": [ 00:04:39.889 "044eca9f-14f2-47c5-a1df-6b5db8120499" 00:04:39.889 ], 00:04:39.889 "product_name": "Malloc disk", 00:04:39.889 "block_size": 512, 00:04:39.889 "num_blocks": 16384, 00:04:39.889 "uuid": "044eca9f-14f2-47c5-a1df-6b5db8120499", 00:04:39.889 "assigned_rate_limits": { 00:04:39.889 "rw_ios_per_sec": 0, 00:04:39.889 "rw_mbytes_per_sec": 0, 00:04:39.889 "r_mbytes_per_sec": 0, 00:04:39.889 "w_mbytes_per_sec": 0 00:04:39.889 }, 
00:04:39.889 "claimed": false, 00:04:39.889 "zoned": false, 00:04:39.889 "supported_io_types": { 00:04:39.889 "read": true, 00:04:39.889 "write": true, 00:04:39.889 "unmap": true, 00:04:39.889 "flush": true, 00:04:39.889 "reset": true, 00:04:39.889 "nvme_admin": false, 00:04:39.889 "nvme_io": false, 00:04:39.889 "nvme_io_md": false, 00:04:39.889 "write_zeroes": true, 00:04:39.889 "zcopy": true, 00:04:39.889 "get_zone_info": false, 00:04:39.889 "zone_management": false, 00:04:39.889 "zone_append": false, 00:04:39.889 "compare": false, 00:04:39.889 "compare_and_write": false, 00:04:39.889 "abort": true, 00:04:39.889 "seek_hole": false, 00:04:39.889 "seek_data": false, 00:04:39.889 "copy": true, 00:04:39.889 "nvme_iov_md": false 00:04:39.889 }, 00:04:39.889 "memory_domains": [ 00:04:39.889 { 00:04:39.889 "dma_device_id": "system", 00:04:39.889 "dma_device_type": 1 00:04:39.889 }, 00:04:39.889 { 00:04:39.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.889 "dma_device_type": 2 00:04:39.889 } 00:04:39.889 ], 00:04:39.889 "driver_specific": {} 00:04:39.889 } 00:04:39.889 ]' 00:04:39.889 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.889 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.889 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:39.889 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.889 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.889 [2024-10-06 10:59:37.376231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:39.889 [2024-10-06 10:59:37.376261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.889 [2024-10-06 10:59:37.376273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc1a580 00:04:39.889 [2024-10-06 10:59:37.376280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.889 [2024-10-06 10:59:37.377343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.889 [2024-10-06 10:59:37.377363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.889 Passthru0 00:04:39.889 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.889 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:39.889 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.889 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.889 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.889 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.889 { 00:04:39.889 "name": "Malloc0", 00:04:39.889 "aliases": [ 00:04:39.889 "044eca9f-14f2-47c5-a1df-6b5db8120499" 00:04:39.889 ], 00:04:39.889 "product_name": "Malloc disk", 00:04:39.889 "block_size": 512, 00:04:39.889 "num_blocks": 16384, 00:04:39.889 "uuid": "044eca9f-14f2-47c5-a1df-6b5db8120499", 00:04:39.889 "assigned_rate_limits": { 00:04:39.889 "rw_ios_per_sec": 0, 00:04:39.889 "rw_mbytes_per_sec": 0, 00:04:39.889 "r_mbytes_per_sec": 0, 00:04:39.889 "w_mbytes_per_sec": 0 00:04:39.889 }, 00:04:39.889 "claimed": true, 00:04:39.889 "claim_type": "exclusive_write", 00:04:39.889 "zoned": false, 00:04:39.889 "supported_io_types": { 00:04:39.889 "read": true, 00:04:39.889 "write": true, 00:04:39.889 "unmap": true, 00:04:39.889 "flush": 
true, 00:04:39.889 "reset": true, 00:04:39.889 "nvme_admin": false, 00:04:39.889 "nvme_io": false, 00:04:39.889 "nvme_io_md": false, 00:04:39.889 "write_zeroes": true, 00:04:39.889 "zcopy": true, 00:04:39.889 "get_zone_info": false, 00:04:39.889 "zone_management": false, 00:04:39.889 "zone_append": false, 00:04:39.889 "compare": false, 00:04:39.889 "compare_and_write": false, 00:04:39.889 "abort": true, 00:04:39.889 "seek_hole": false, 00:04:39.889 "seek_data": false, 00:04:39.889 "copy": true, 00:04:39.889 "nvme_iov_md": false 00:04:39.889 }, 00:04:39.889 "memory_domains": [ 00:04:39.889 { 00:04:39.889 "dma_device_id": "system", 00:04:39.889 "dma_device_type": 1 00:04:39.889 }, 00:04:39.889 { 00:04:39.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.889 "dma_device_type": 2 00:04:39.889 } 00:04:39.889 ], 00:04:39.889 "driver_specific": {} 00:04:39.889 }, 00:04:39.889 { 00:04:39.889 "name": "Passthru0", 00:04:39.889 "aliases": [ 00:04:39.889 "54f34340-03e1-50d6-9f35-67ac84cffaa9" 00:04:39.889 ], 00:04:39.889 "product_name": "passthru", 00:04:39.889 "block_size": 512, 00:04:39.889 "num_blocks": 16384, 00:04:39.889 "uuid": "54f34340-03e1-50d6-9f35-67ac84cffaa9", 00:04:39.889 "assigned_rate_limits": { 00:04:39.889 "rw_ios_per_sec": 0, 00:04:39.889 "rw_mbytes_per_sec": 0, 00:04:39.889 "r_mbytes_per_sec": 0, 00:04:39.889 "w_mbytes_per_sec": 0 00:04:39.889 }, 00:04:39.889 "claimed": false, 00:04:39.889 "zoned": false, 00:04:39.889 "supported_io_types": { 00:04:39.889 "read": true, 00:04:39.889 "write": true, 00:04:39.889 "unmap": true, 00:04:39.889 "flush": true, 00:04:39.889 "reset": true, 00:04:39.889 "nvme_admin": false, 00:04:39.889 "nvme_io": false, 00:04:39.889 "nvme_io_md": false, 00:04:39.890 "write_zeroes": true, 00:04:39.890 "zcopy": true, 00:04:39.890 "get_zone_info": false, 00:04:39.890 "zone_management": false, 00:04:39.890 "zone_append": false, 00:04:39.890 "compare": false, 00:04:39.890 "compare_and_write": false, 00:04:39.890 "abort": true, 00:04:39.890 "seek_hole": false, 00:04:39.890 "seek_data": false, 00:04:39.890 "copy": true, 00:04:39.890 "nvme_iov_md": false 00:04:39.890 }, 00:04:39.890 "memory_domains": [ 00:04:39.890 { 00:04:39.890 "dma_device_id": "system", 00:04:39.890 "dma_device_type": 1 00:04:39.890 }, 00:04:39.890 { 00:04:39.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.890 "dma_device_type": 2 00:04:39.890 } 00:04:39.890 ], 00:04:39.890 "driver_specific": { 00:04:39.890 "passthru": { 00:04:39.890 "name": "Passthru0", 00:04:39.890 "base_bdev_name": "Malloc0" 00:04:39.890 } 00:04:39.890 } 00:04:39.890 } 00:04:39.890 ]' 00:04:39.890 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:39.890 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:39.890 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:39.890 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.890 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.890 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.890 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:39.890 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.890 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.890 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.890 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:39.890 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.890 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.149 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.150 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.150 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.150 10:59:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.150 00:04:40.150 real 0m0.258s 00:04:40.150 user 0m0.168s 00:04:40.150 sys 0m0.029s 00:04:40.150 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.150 10:59:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.150 ************************************ 00:04:40.150 END TEST rpc_integrity 00:04:40.150 ************************************ 00:04:40.150 10:59:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:40.150 10:59:37 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.150 10:59:37 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.150 10:59:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.150 ************************************ 00:04:40.150 START TEST rpc_plugins 00:04:40.150 ************************************ 00:04:40.150 10:59:37 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:40.150 10:59:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:40.150 10:59:37 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.150 10:59:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.150 10:59:37 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.150 10:59:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:40.150 10:59:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:40.150 10:59:37 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.150 10:59:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.150 10:59:37 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.150 10:59:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:40.150 { 00:04:40.150 "name": "Malloc1", 00:04:40.150 "aliases": [ 00:04:40.150 "55268d06-e04e-4ba6-95d4-ccbe017263e2" 00:04:40.150 ], 00:04:40.150 "product_name": "Malloc disk", 00:04:40.150 "block_size": 4096, 00:04:40.150 "num_blocks": 256, 00:04:40.150 "uuid": "55268d06-e04e-4ba6-95d4-ccbe017263e2", 00:04:40.150 "assigned_rate_limits": { 00:04:40.150 "rw_ios_per_sec": 0, 00:04:40.150 "rw_mbytes_per_sec": 0, 00:04:40.150 "r_mbytes_per_sec": 0, 00:04:40.150 "w_mbytes_per_sec": 0 00:04:40.150 }, 00:04:40.150 "claimed": false, 00:04:40.150 "zoned": false, 00:04:40.150 "supported_io_types": { 00:04:40.150 "read": true, 00:04:40.150 "write": true, 00:04:40.150 "unmap": true, 00:04:40.150 "flush": true, 00:04:40.150 "reset": true, 00:04:40.150 "nvme_admin": false, 00:04:40.150 "nvme_io": false, 00:04:40.150 "nvme_io_md": false, 00:04:40.150 "write_zeroes": true, 00:04:40.150 "zcopy": true, 00:04:40.150 "get_zone_info": false, 00:04:40.150 "zone_management": false, 00:04:40.150 "zone_append": false, 00:04:40.150 "compare": false, 00:04:40.150 "compare_and_write": false, 00:04:40.150 "abort": true, 00:04:40.150 "seek_hole": false, 00:04:40.150 "seek_data": false, 00:04:40.150 "copy": true, 00:04:40.150 "nvme_iov_md": false 
00:04:40.150 }, 00:04:40.150 "memory_domains": [ 00:04:40.150 { 00:04:40.150 "dma_device_id": "system", 00:04:40.150 "dma_device_type": 1 00:04:40.150 }, 00:04:40.150 { 00:04:40.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.150 "dma_device_type": 2 00:04:40.150 } 00:04:40.150 ], 00:04:40.150 "driver_specific": {} 00:04:40.150 } 00:04:40.150 ]' 00:04:40.150 10:59:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:40.150 10:59:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:40.150 10:59:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:40.150 10:59:37 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.150 10:59:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.150 10:59:37 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.150 10:59:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:40.150 10:59:37 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.150 10:59:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.150 10:59:37 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.150 10:59:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:40.150 10:59:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:40.150 10:59:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:40.150 00:04:40.150 real 0m0.139s 00:04:40.150 user 0m0.082s 00:04:40.150 sys 0m0.019s 00:04:40.150 10:59:37 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.150 10:59:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.150 ************************************ 00:04:40.150 END TEST rpc_plugins 00:04:40.150 ************************************ 00:04:40.409 10:59:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:40.409 10:59:37 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.409 10:59:37 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.409 10:59:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.409 ************************************ 00:04:40.409 START TEST rpc_trace_cmd_test 00:04:40.409 ************************************ 00:04:40.409 10:59:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:40.409 10:59:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:40.409 10:59:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:40.409 10:59:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.409 10:59:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:40.409 10:59:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.409 10:59:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:40.409 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1843743", 00:04:40.409 "tpoint_group_mask": "0x8", 00:04:40.409 "iscsi_conn": { 00:04:40.409 "mask": "0x2", 00:04:40.409 "tpoint_mask": "0x0" 00:04:40.409 }, 00:04:40.409 "scsi": { 00:04:40.409 "mask": "0x4", 00:04:40.409 "tpoint_mask": "0x0" 00:04:40.409 }, 00:04:40.409 "bdev": { 00:04:40.409 "mask": "0x8", 00:04:40.409 "tpoint_mask": "0xffffffffffffffff" 00:04:40.409 }, 00:04:40.409 "nvmf_rdma": { 00:04:40.409 "mask": "0x10", 00:04:40.409 "tpoint_mask": "0x0" 00:04:40.409 }, 00:04:40.409 "nvmf_tcp": { 00:04:40.409 "mask": "0x20", 00:04:40.409 
"tpoint_mask": "0x0" 00:04:40.409 }, 00:04:40.409 "ftl": { 00:04:40.409 "mask": "0x40", 00:04:40.409 "tpoint_mask": "0x0" 00:04:40.409 }, 00:04:40.409 "blobfs": { 00:04:40.409 "mask": "0x80", 00:04:40.409 "tpoint_mask": "0x0" 00:04:40.409 }, 00:04:40.409 "dsa": { 00:04:40.409 "mask": "0x200", 00:04:40.409 "tpoint_mask": "0x0" 00:04:40.409 }, 00:04:40.409 "thread": { 00:04:40.409 "mask": "0x400", 00:04:40.409 "tpoint_mask": "0x0" 00:04:40.409 }, 00:04:40.409 "nvme_pcie": { 00:04:40.409 "mask": "0x800", 00:04:40.409 "tpoint_mask": "0x0" 00:04:40.409 }, 00:04:40.409 "iaa": { 00:04:40.409 "mask": "0x1000", 00:04:40.409 "tpoint_mask": "0x0" 00:04:40.409 }, 00:04:40.409 "nvme_tcp": { 00:04:40.409 "mask": "0x2000", 00:04:40.409 "tpoint_mask": "0x0" 00:04:40.409 }, 00:04:40.409 "bdev_nvme": { 00:04:40.409 "mask": "0x4000", 00:04:40.409 "tpoint_mask": "0x0" 00:04:40.409 }, 00:04:40.409 "sock": { 00:04:40.409 "mask": "0x8000", 00:04:40.409 "tpoint_mask": "0x0" 00:04:40.409 }, 00:04:40.409 "blob": { 00:04:40.409 "mask": "0x10000", 00:04:40.409 "tpoint_mask": "0x0" 00:04:40.409 }, 00:04:40.409 "bdev_raid": { 00:04:40.409 "mask": "0x20000", 00:04:40.409 "tpoint_mask": "0x0" 00:04:40.409 }, 00:04:40.409 "scheduler": { 00:04:40.409 "mask": "0x40000", 00:04:40.409 "tpoint_mask": "0x0" 00:04:40.409 } 00:04:40.409 }' 00:04:40.409 10:59:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:40.409 10:59:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:40.409 10:59:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:40.409 10:59:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:40.409 10:59:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:40.409 10:59:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:40.409 10:59:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:40.409 10:59:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:40.409 10:59:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:40.669 10:59:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:40.669 00:04:40.669 real 0m0.229s 00:04:40.669 user 0m0.190s 00:04:40.669 sys 0m0.030s 00:04:40.669 10:59:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.669 10:59:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:40.669 ************************************ 00:04:40.669 END TEST rpc_trace_cmd_test 00:04:40.669 ************************************ 00:04:40.669 10:59:38 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:40.669 10:59:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:40.669 10:59:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:40.669 10:59:38 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.669 10:59:38 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.669 10:59:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.669 ************************************ 00:04:40.669 START TEST rpc_daemon_integrity 00:04:40.669 ************************************ 00:04:40.669 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:40.669 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:40.669 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.669 10:59:38 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.669 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.669 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:40.669 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:40.669 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:40.669 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:40.669 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.669 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.669 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.669 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:40.669 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:40.670 { 00:04:40.670 "name": "Malloc2", 00:04:40.670 "aliases": [ 00:04:40.670 "a85d6b11-9a32-4012-a84c-83c03cea2515" 00:04:40.670 ], 00:04:40.670 "product_name": "Malloc disk", 00:04:40.670 "block_size": 512, 00:04:40.670 "num_blocks": 16384, 00:04:40.670 "uuid": "a85d6b11-9a32-4012-a84c-83c03cea2515", 00:04:40.670 "assigned_rate_limits": { 00:04:40.670 "rw_ios_per_sec": 0, 00:04:40.670 "rw_mbytes_per_sec": 0, 00:04:40.670 "r_mbytes_per_sec": 0, 00:04:40.670 "w_mbytes_per_sec": 0 00:04:40.670 }, 00:04:40.670 "claimed": false, 00:04:40.670 "zoned": false, 00:04:40.670 "supported_io_types": { 00:04:40.670 "read": true, 00:04:40.670 "write": true, 00:04:40.670 "unmap": true, 00:04:40.670 "flush": true, 00:04:40.670 "reset": true, 00:04:40.670 "nvme_admin": false, 00:04:40.670 "nvme_io": false, 00:04:40.670 "nvme_io_md": false, 00:04:40.670 "write_zeroes": true, 00:04:40.670 "zcopy": true, 00:04:40.670 "get_zone_info": false, 00:04:40.670 "zone_management": false, 00:04:40.670 "zone_append": false, 00:04:40.670 "compare": false, 00:04:40.670 "compare_and_write": false, 00:04:40.670 "abort": true, 00:04:40.670 "seek_hole": false, 00:04:40.670 "seek_data": false, 00:04:40.670 "copy": true, 00:04:40.670 "nvme_iov_md": false 00:04:40.670 }, 00:04:40.670 "memory_domains": [ 00:04:40.670 { 00:04:40.670 "dma_device_id": "system", 00:04:40.670 "dma_device_type": 1 00:04:40.670 }, 00:04:40.670 { 00:04:40.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.670 "dma_device_type": 2 00:04:40.670 } 00:04:40.670 ], 00:04:40.670 "driver_specific": {} 00:04:40.670 } 00:04:40.670 ]' 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.670 [2024-10-06 10:59:38.186428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:40.670 
[2024-10-06 10:59:38.186456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:40.670 [2024-10-06 10:59:38.186467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc07530 00:04:40.670 [2024-10-06 10:59:38.186473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:40.670 [2024-10-06 10:59:38.187413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:40.670 [2024-10-06 10:59:38.187434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:40.670 Passthru0 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:40.670 { 00:04:40.670 "name": "Malloc2", 00:04:40.670 "aliases": [ 00:04:40.670 "a85d6b11-9a32-4012-a84c-83c03cea2515" 00:04:40.670 ], 00:04:40.670 "product_name": "Malloc disk", 00:04:40.670 "block_size": 512, 00:04:40.670 "num_blocks": 16384, 00:04:40.670 "uuid": "a85d6b11-9a32-4012-a84c-83c03cea2515", 00:04:40.670 "assigned_rate_limits": { 00:04:40.670 "rw_ios_per_sec": 0, 00:04:40.670 "rw_mbytes_per_sec": 0, 00:04:40.670 "r_mbytes_per_sec": 0, 00:04:40.670 "w_mbytes_per_sec": 0 00:04:40.670 }, 00:04:40.670 "claimed": true, 00:04:40.670 "claim_type": "exclusive_write", 00:04:40.670 "zoned": false, 00:04:40.670 "supported_io_types": { 00:04:40.670 "read": true, 00:04:40.670 "write": true, 00:04:40.670 "unmap": true, 00:04:40.670 "flush": true, 00:04:40.670 "reset": true, 00:04:40.670 "nvme_admin": false, 00:04:40.670 "nvme_io": false, 00:04:40.670 "nvme_io_md": false, 00:04:40.670 "write_zeroes": true, 00:04:40.670 "zcopy": true, 00:04:40.670 "get_zone_info": false, 00:04:40.670 "zone_management": false, 00:04:40.670 "zone_append": false, 00:04:40.670 "compare": false, 00:04:40.670 "compare_and_write": false, 00:04:40.670 "abort": true, 00:04:40.670 "seek_hole": false, 00:04:40.670 "seek_data": false, 00:04:40.670 "copy": true, 00:04:40.670 "nvme_iov_md": false 00:04:40.670 }, 00:04:40.670 "memory_domains": [ 00:04:40.670 { 00:04:40.670 "dma_device_id": "system", 00:04:40.670 "dma_device_type": 1 00:04:40.670 }, 00:04:40.670 { 00:04:40.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.670 "dma_device_type": 2 00:04:40.670 } 00:04:40.670 ], 00:04:40.670 "driver_specific": {} 00:04:40.670 }, 00:04:40.670 { 00:04:40.670 "name": "Passthru0", 00:04:40.670 "aliases": [ 00:04:40.670 "802ca1ca-e872-5e57-a170-22e4da5c4281" 00:04:40.670 ], 00:04:40.670 "product_name": "passthru", 00:04:40.670 "block_size": 512, 00:04:40.670 "num_blocks": 16384, 00:04:40.670 "uuid": "802ca1ca-e872-5e57-a170-22e4da5c4281", 00:04:40.670 "assigned_rate_limits": { 00:04:40.670 "rw_ios_per_sec": 0, 00:04:40.670 "rw_mbytes_per_sec": 0, 00:04:40.670 "r_mbytes_per_sec": 0, 00:04:40.670 "w_mbytes_per_sec": 0 00:04:40.670 }, 00:04:40.670 "claimed": false, 00:04:40.670 "zoned": false, 00:04:40.670 "supported_io_types": { 00:04:40.670 "read": true, 00:04:40.670 "write": true, 00:04:40.670 "unmap": true, 00:04:40.670 "flush": true, 00:04:40.670 "reset": true, 
00:04:40.670 "nvme_admin": false, 00:04:40.670 "nvme_io": false, 00:04:40.670 "nvme_io_md": false, 00:04:40.670 "write_zeroes": true, 00:04:40.670 "zcopy": true, 00:04:40.670 "get_zone_info": false, 00:04:40.670 "zone_management": false, 00:04:40.670 "zone_append": false, 00:04:40.670 "compare": false, 00:04:40.670 "compare_and_write": false, 00:04:40.670 "abort": true, 00:04:40.670 "seek_hole": false, 00:04:40.670 "seek_data": false, 00:04:40.670 "copy": true, 00:04:40.670 "nvme_iov_md": false 00:04:40.670 }, 00:04:40.670 "memory_domains": [ 00:04:40.670 { 00:04:40.670 "dma_device_id": "system", 00:04:40.670 "dma_device_type": 1 00:04:40.670 }, 00:04:40.670 { 00:04:40.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.670 "dma_device_type": 2 00:04:40.670 } 00:04:40.670 ], 00:04:40.670 "driver_specific": { 00:04:40.670 "passthru": { 00:04:40.670 "name": "Passthru0", 00:04:40.670 "base_bdev_name": "Malloc2" 00:04:40.670 } 00:04:40.670 } 00:04:40.670 } 00:04:40.670 ]' 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.670 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.931 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.931 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:40.931 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.931 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.931 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.931 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.931 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.931 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.931 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.931 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.931 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.932 10:59:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.932 00:04:40.932 real 0m0.237s 00:04:40.932 user 0m0.153s 00:04:40.932 sys 0m0.026s 00:04:40.932 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.932 10:59:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.932 ************************************ 00:04:40.932 END TEST rpc_daemon_integrity 00:04:40.932 ************************************ 00:04:40.932 10:59:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:40.932 10:59:38 rpc -- rpc/rpc.sh@84 -- # killprocess 1843743 00:04:40.932 10:59:38 rpc -- common/autotest_common.sh@950 -- # '[' -z 1843743 ']' 00:04:40.932 10:59:38 rpc -- common/autotest_common.sh@954 -- # kill -0 1843743 00:04:40.932 10:59:38 rpc -- common/autotest_common.sh@955 -- # uname 00:04:40.932 10:59:38 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:40.932 10:59:38 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1843743 
00:04:40.932 10:59:38 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:40.932 10:59:38 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:40.932 10:59:38 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1843743' 00:04:40.932 killing process with pid 1843743 00:04:40.932 10:59:38 rpc -- common/autotest_common.sh@969 -- # kill 1843743 00:04:40.932 10:59:38 rpc -- common/autotest_common.sh@974 -- # wait 1843743 00:04:41.189 00:04:41.189 real 0m1.988s 00:04:41.189 user 0m2.542s 00:04:41.189 sys 0m0.650s 00:04:41.189 10:59:38 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.189 10:59:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.189 ************************************ 00:04:41.189 END TEST rpc 00:04:41.189 ************************************ 00:04:41.189 10:59:38 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:41.189 10:59:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.189 10:59:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.189 10:59:38 -- common/autotest_common.sh@10 -- # set +x 00:04:41.448 ************************************ 00:04:41.448 START TEST skip_rpc 00:04:41.448 ************************************ 00:04:41.448 10:59:38 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:41.448 * Looking for test storage... 00:04:41.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:41.448 10:59:38 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:41.448 10:59:38 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:41.448 10:59:38 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:41.448 10:59:38 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.448 10:59:38 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:41.448 10:59:38 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.448 10:59:38 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:41.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.448 --rc genhtml_branch_coverage=1 00:04:41.448 --rc genhtml_function_coverage=1 00:04:41.448 --rc genhtml_legend=1 00:04:41.448 --rc geninfo_all_blocks=1 00:04:41.448 --rc geninfo_unexecuted_blocks=1 00:04:41.448 00:04:41.448 ' 00:04:41.448 10:59:38 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:41.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.448 --rc genhtml_branch_coverage=1 00:04:41.448 --rc genhtml_function_coverage=1 00:04:41.448 --rc genhtml_legend=1 00:04:41.448 --rc geninfo_all_blocks=1 00:04:41.448 --rc geninfo_unexecuted_blocks=1 00:04:41.448 00:04:41.448 ' 00:04:41.448 10:59:38 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:41.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.448 --rc genhtml_branch_coverage=1 00:04:41.448 --rc genhtml_function_coverage=1 00:04:41.448 --rc genhtml_legend=1 00:04:41.448 --rc geninfo_all_blocks=1 00:04:41.448 --rc geninfo_unexecuted_blocks=1 00:04:41.448 00:04:41.448 ' 00:04:41.448 10:59:38 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:41.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.448 --rc genhtml_branch_coverage=1 00:04:41.448 --rc genhtml_function_coverage=1 00:04:41.448 --rc genhtml_legend=1 00:04:41.448 --rc geninfo_all_blocks=1 00:04:41.448 --rc geninfo_unexecuted_blocks=1 00:04:41.448 00:04:41.448 ' 00:04:41.448 10:59:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:41.448 10:59:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:41.448 10:59:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:41.448 10:59:38 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.448 10:59:38 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.448 10:59:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.448 ************************************ 00:04:41.448 START TEST skip_rpc 00:04:41.448 ************************************ 00:04:41.448 10:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:41.448 
10:59:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1844366 00:04:41.448 10:59:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.448 10:59:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:41.448 10:59:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:41.708 [2024-10-06 10:59:39.037294] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:04:41.708 [2024-10-06 10:59:39.037332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1844366 ] 00:04:41.708 [2024-10-06 10:59:39.093160] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.708 [2024-10-06 10:59:39.132525] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1844366 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1844366 ']' 00:04:46.983 10:59:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1844366 00:04:46.983 10:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:46.983 10:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.983 10:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1844366 00:04:46.983 10:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:46.983 10:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:46.983 10:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1844366' 00:04:46.983 killing process with pid 1844366 00:04:46.983 10:59:44 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1844366 00:04:46.983 10:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1844366 00:04:46.983 00:04:46.983 real 0m5.385s 00:04:46.983 user 0m5.142s 00:04:46.983 sys 0m0.283s 00:04:46.983 10:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.983 10:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.983 ************************************ 00:04:46.983 END TEST skip_rpc 00:04:46.983 ************************************ 00:04:46.983 10:59:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:46.983 10:59:44 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.983 10:59:44 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.983 10:59:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.983 ************************************ 00:04:46.983 START TEST skip_rpc_with_json 00:04:46.983 ************************************ 00:04:46.983 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:46.983 10:59:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:46.983 10:59:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1845294 00:04:46.983 10:59:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.983 10:59:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1845294 00:04:46.983 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1845294 ']' 00:04:46.983 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.983 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.983 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.983 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.983 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.983 10:59:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.983 [2024-10-06 10:59:44.488453] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:04:46.983 [2024-10-06 10:59:44.488494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1845294 ] 00:04:46.983 [2024-10-06 10:59:44.541378] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.244 [2024-10-06 10:59:44.581306] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.244 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:47.244 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:47.244 10:59:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:47.244 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.244 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.244 [2024-10-06 10:59:44.771016] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:47.244 request: 00:04:47.244 { 00:04:47.244 "trtype": "tcp", 00:04:47.244 "method": "nvmf_get_transports", 00:04:47.244 "req_id": 1 00:04:47.244 } 00:04:47.244 Got JSON-RPC error response 00:04:47.244 response: 00:04:47.244 { 00:04:47.244 "code": -19, 00:04:47.244 "message": "No such device" 00:04:47.244 } 00:04:47.244 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:47.244 10:59:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:47.244 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.244 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.244 [2024-10-06 10:59:44.779120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.244 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.244 10:59:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:47.244 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.244 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.503 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.503 10:59:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:47.503 { 00:04:47.503 "subsystems": [ 00:04:47.503 { 00:04:47.503 "subsystem": "fsdev", 00:04:47.503 "config": [ 00:04:47.503 { 00:04:47.503 "method": "fsdev_set_opts", 00:04:47.503 "params": { 00:04:47.503 "fsdev_io_pool_size": 65535, 00:04:47.503 "fsdev_io_cache_size": 256 00:04:47.503 } 00:04:47.503 } 00:04:47.503 ] 00:04:47.503 }, 00:04:47.503 { 00:04:47.503 "subsystem": "vfio_user_target", 00:04:47.503 "config": null 00:04:47.503 }, 00:04:47.503 { 00:04:47.503 "subsystem": "keyring", 00:04:47.503 "config": [] 00:04:47.503 }, 00:04:47.503 { 00:04:47.503 "subsystem": "iobuf", 00:04:47.503 "config": [ 00:04:47.503 { 00:04:47.503 "method": "iobuf_set_options", 00:04:47.503 "params": { 00:04:47.503 "small_pool_count": 8192, 00:04:47.503 "large_pool_count": 1024, 00:04:47.503 "small_bufsize": 8192, 00:04:47.503 "large_bufsize": 135168 00:04:47.503 } 00:04:47.503 } 00:04:47.503 ] 00:04:47.503 }, 00:04:47.503 { 
00:04:47.503 "subsystem": "sock", 00:04:47.503 "config": [ 00:04:47.503 { 00:04:47.503 "method": "sock_set_default_impl", 00:04:47.503 "params": { 00:04:47.503 "impl_name": "posix" 00:04:47.503 } 00:04:47.503 }, 00:04:47.503 { 00:04:47.503 "method": "sock_impl_set_options", 00:04:47.503 "params": { 00:04:47.503 "impl_name": "ssl", 00:04:47.503 "recv_buf_size": 4096, 00:04:47.503 "send_buf_size": 4096, 00:04:47.503 "enable_recv_pipe": true, 00:04:47.503 "enable_quickack": false, 00:04:47.503 "enable_placement_id": 0, 00:04:47.503 "enable_zerocopy_send_server": true, 00:04:47.503 "enable_zerocopy_send_client": false, 00:04:47.503 "zerocopy_threshold": 0, 00:04:47.503 "tls_version": 0, 00:04:47.503 "enable_ktls": false 00:04:47.503 } 00:04:47.503 }, 00:04:47.503 { 00:04:47.503 "method": "sock_impl_set_options", 00:04:47.503 "params": { 00:04:47.503 "impl_name": "posix", 00:04:47.503 "recv_buf_size": 2097152, 00:04:47.503 "send_buf_size": 2097152, 00:04:47.503 "enable_recv_pipe": true, 00:04:47.503 "enable_quickack": false, 00:04:47.503 "enable_placement_id": 0, 00:04:47.503 "enable_zerocopy_send_server": true, 00:04:47.503 "enable_zerocopy_send_client": false, 00:04:47.503 "zerocopy_threshold": 0, 00:04:47.503 "tls_version": 0, 00:04:47.503 "enable_ktls": false 00:04:47.503 } 00:04:47.503 } 00:04:47.503 ] 00:04:47.503 }, 00:04:47.503 { 00:04:47.503 "subsystem": "vmd", 00:04:47.503 "config": [] 00:04:47.503 }, 00:04:47.503 { 00:04:47.503 "subsystem": "accel", 00:04:47.503 "config": [ 00:04:47.503 { 00:04:47.503 "method": "accel_set_options", 00:04:47.503 "params": { 00:04:47.503 "small_cache_size": 128, 00:04:47.503 "large_cache_size": 16, 00:04:47.503 "task_count": 2048, 00:04:47.503 "sequence_count": 2048, 00:04:47.503 "buf_count": 2048 00:04:47.503 } 00:04:47.503 } 00:04:47.503 ] 00:04:47.503 }, 00:04:47.503 { 00:04:47.503 "subsystem": "bdev", 00:04:47.503 "config": [ 00:04:47.503 { 00:04:47.503 "method": "bdev_set_options", 00:04:47.503 "params": { 00:04:47.503 "bdev_io_pool_size": 65535, 00:04:47.503 "bdev_io_cache_size": 256, 00:04:47.503 "bdev_auto_examine": true, 00:04:47.503 "iobuf_small_cache_size": 128, 00:04:47.503 "iobuf_large_cache_size": 16 00:04:47.503 } 00:04:47.503 }, 00:04:47.503 { 00:04:47.503 "method": "bdev_raid_set_options", 00:04:47.503 "params": { 00:04:47.503 "process_window_size_kb": 1024, 00:04:47.503 "process_max_bandwidth_mb_sec": 0 00:04:47.503 } 00:04:47.503 }, 00:04:47.503 { 00:04:47.503 "method": "bdev_iscsi_set_options", 00:04:47.503 "params": { 00:04:47.503 "timeout_sec": 30 00:04:47.503 } 00:04:47.503 }, 00:04:47.503 { 00:04:47.503 "method": "bdev_nvme_set_options", 00:04:47.503 "params": { 00:04:47.503 "action_on_timeout": "none", 00:04:47.503 "timeout_us": 0, 00:04:47.503 "timeout_admin_us": 0, 00:04:47.503 "keep_alive_timeout_ms": 10000, 00:04:47.503 "arbitration_burst": 0, 00:04:47.503 "low_priority_weight": 0, 00:04:47.503 "medium_priority_weight": 0, 00:04:47.503 "high_priority_weight": 0, 00:04:47.504 "nvme_adminq_poll_period_us": 10000, 00:04:47.504 "nvme_ioq_poll_period_us": 0, 00:04:47.504 "io_queue_requests": 0, 00:04:47.504 "delay_cmd_submit": true, 00:04:47.504 "transport_retry_count": 4, 00:04:47.504 "bdev_retry_count": 3, 00:04:47.504 "transport_ack_timeout": 0, 00:04:47.504 "ctrlr_loss_timeout_sec": 0, 00:04:47.504 "reconnect_delay_sec": 0, 00:04:47.504 "fast_io_fail_timeout_sec": 0, 00:04:47.504 "disable_auto_failback": false, 00:04:47.504 "generate_uuids": false, 00:04:47.504 "transport_tos": 0, 00:04:47.504 "nvme_error_stat": false, 
00:04:47.504 "rdma_srq_size": 0, 00:04:47.504 "io_path_stat": false, 00:04:47.504 "allow_accel_sequence": false, 00:04:47.504 "rdma_max_cq_size": 0, 00:04:47.504 "rdma_cm_event_timeout_ms": 0, 00:04:47.504 "dhchap_digests": [ 00:04:47.504 "sha256", 00:04:47.504 "sha384", 00:04:47.504 "sha512" 00:04:47.504 ], 00:04:47.504 "dhchap_dhgroups": [ 00:04:47.504 "null", 00:04:47.504 "ffdhe2048", 00:04:47.504 "ffdhe3072", 00:04:47.504 "ffdhe4096", 00:04:47.504 "ffdhe6144", 00:04:47.504 "ffdhe8192" 00:04:47.504 ] 00:04:47.504 } 00:04:47.504 }, 00:04:47.504 { 00:04:47.504 "method": "bdev_nvme_set_hotplug", 00:04:47.504 "params": { 00:04:47.504 "period_us": 100000, 00:04:47.504 "enable": false 00:04:47.504 } 00:04:47.504 }, 00:04:47.504 { 00:04:47.504 "method": "bdev_wait_for_examine" 00:04:47.504 } 00:04:47.504 ] 00:04:47.504 }, 00:04:47.504 { 00:04:47.504 "subsystem": "scsi", 00:04:47.504 "config": null 00:04:47.504 }, 00:04:47.504 { 00:04:47.504 "subsystem": "scheduler", 00:04:47.504 "config": [ 00:04:47.504 { 00:04:47.504 "method": "framework_set_scheduler", 00:04:47.504 "params": { 00:04:47.504 "name": "static" 00:04:47.504 } 00:04:47.504 } 00:04:47.504 ] 00:04:47.504 }, 00:04:47.504 { 00:04:47.504 "subsystem": "vhost_scsi", 00:04:47.504 "config": [] 00:04:47.504 }, 00:04:47.504 { 00:04:47.504 "subsystem": "vhost_blk", 00:04:47.504 "config": [] 00:04:47.504 }, 00:04:47.504 { 00:04:47.504 "subsystem": "ublk", 00:04:47.504 "config": [] 00:04:47.504 }, 00:04:47.504 { 00:04:47.504 "subsystem": "nbd", 00:04:47.504 "config": [] 00:04:47.504 }, 00:04:47.504 { 00:04:47.504 "subsystem": "nvmf", 00:04:47.504 "config": [ 00:04:47.504 { 00:04:47.504 "method": "nvmf_set_config", 00:04:47.504 "params": { 00:04:47.504 "discovery_filter": "match_any", 00:04:47.504 "admin_cmd_passthru": { 00:04:47.504 "identify_ctrlr": false 00:04:47.504 }, 00:04:47.504 "dhchap_digests": [ 00:04:47.504 "sha256", 00:04:47.504 "sha384", 00:04:47.504 "sha512" 00:04:47.504 ], 00:04:47.504 "dhchap_dhgroups": [ 00:04:47.504 "null", 00:04:47.504 "ffdhe2048", 00:04:47.504 "ffdhe3072", 00:04:47.504 "ffdhe4096", 00:04:47.504 "ffdhe6144", 00:04:47.504 "ffdhe8192" 00:04:47.504 ] 00:04:47.504 } 00:04:47.504 }, 00:04:47.504 { 00:04:47.504 "method": "nvmf_set_max_subsystems", 00:04:47.504 "params": { 00:04:47.504 "max_subsystems": 1024 00:04:47.504 } 00:04:47.504 }, 00:04:47.504 { 00:04:47.504 "method": "nvmf_set_crdt", 00:04:47.504 "params": { 00:04:47.504 "crdt1": 0, 00:04:47.504 "crdt2": 0, 00:04:47.504 "crdt3": 0 00:04:47.504 } 00:04:47.504 }, 00:04:47.504 { 00:04:47.504 "method": "nvmf_create_transport", 00:04:47.504 "params": { 00:04:47.504 "trtype": "TCP", 00:04:47.504 "max_queue_depth": 128, 00:04:47.504 "max_io_qpairs_per_ctrlr": 127, 00:04:47.504 "in_capsule_data_size": 4096, 00:04:47.504 "max_io_size": 131072, 00:04:47.504 "io_unit_size": 131072, 00:04:47.504 "max_aq_depth": 128, 00:04:47.504 "num_shared_buffers": 511, 00:04:47.504 "buf_cache_size": 4294967295, 00:04:47.504 "dif_insert_or_strip": false, 00:04:47.504 "zcopy": false, 00:04:47.504 "c2h_success": true, 00:04:47.504 "sock_priority": 0, 00:04:47.504 "abort_timeout_sec": 1, 00:04:47.504 "ack_timeout": 0, 00:04:47.504 "data_wr_pool_size": 0 00:04:47.504 } 00:04:47.504 } 00:04:47.504 ] 00:04:47.504 }, 00:04:47.504 { 00:04:47.504 "subsystem": "iscsi", 00:04:47.504 "config": [ 00:04:47.504 { 00:04:47.504 "method": "iscsi_set_options", 00:04:47.504 "params": { 00:04:47.504 "node_base": "iqn.2016-06.io.spdk", 00:04:47.504 "max_sessions": 128, 00:04:47.504 
"max_connections_per_session": 2, 00:04:47.504 "max_queue_depth": 64, 00:04:47.504 "default_time2wait": 2, 00:04:47.504 "default_time2retain": 20, 00:04:47.504 "first_burst_length": 8192, 00:04:47.504 "immediate_data": true, 00:04:47.504 "allow_duplicated_isid": false, 00:04:47.504 "error_recovery_level": 0, 00:04:47.504 "nop_timeout": 60, 00:04:47.504 "nop_in_interval": 30, 00:04:47.504 "disable_chap": false, 00:04:47.504 "require_chap": false, 00:04:47.504 "mutual_chap": false, 00:04:47.504 "chap_group": 0, 00:04:47.504 "max_large_datain_per_connection": 64, 00:04:47.504 "max_r2t_per_connection": 4, 00:04:47.504 "pdu_pool_size": 36864, 00:04:47.504 "immediate_data_pool_size": 16384, 00:04:47.504 "data_out_pool_size": 2048 00:04:47.504 } 00:04:47.504 } 00:04:47.504 ] 00:04:47.504 } 00:04:47.504 ] 00:04:47.504 } 00:04:47.504 10:59:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:47.504 10:59:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1845294 00:04:47.504 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1845294 ']' 00:04:47.504 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1845294 00:04:47.504 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:47.504 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:47.504 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1845294 00:04:47.504 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:47.504 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:47.504 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1845294' 00:04:47.504 killing process with pid 1845294 00:04:47.504 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1845294 00:04:47.504 10:59:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1845294 00:04:47.763 10:59:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1845330 00:04:47.763 10:59:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:47.763 10:59:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:53.054 10:59:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1845330 00:04:53.054 10:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1845330 ']' 00:04:53.054 10:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1845330 00:04:53.054 10:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:53.054 10:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.054 10:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1845330 00:04:53.054 10:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.054 10:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.054 10:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 1845330' 00:04:53.054 killing process with pid 1845330 00:04:53.054 10:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1845330 00:04:53.054 10:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1845330 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:53.314 00:04:53.314 real 0m6.237s 00:04:53.314 user 0m5.923s 00:04:53.314 sys 0m0.581s 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.314 ************************************ 00:04:53.314 END TEST skip_rpc_with_json 00:04:53.314 ************************************ 00:04:53.314 10:59:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:53.314 10:59:50 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.314 10:59:50 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.314 10:59:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.314 ************************************ 00:04:53.314 START TEST skip_rpc_with_delay 00:04:53.314 ************************************ 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.314 [2024-10-06 
10:59:50.784444] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:53.314 [2024-10-06 10:59:50.784501] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:53.314 00:04:53.314 real 0m0.065s 00:04:53.314 user 0m0.041s 00:04:53.314 sys 0m0.024s 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.314 10:59:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:53.314 ************************************ 00:04:53.314 END TEST skip_rpc_with_delay 00:04:53.314 ************************************ 00:04:53.314 10:59:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:53.314 10:59:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:53.314 10:59:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:53.314 10:59:50 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.314 10:59:50 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.314 10:59:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.314 ************************************ 00:04:53.314 START TEST exit_on_failed_rpc_init 00:04:53.314 ************************************ 00:04:53.314 10:59:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:53.314 10:59:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1846389 00:04:53.314 10:59:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1846389 00:04:53.314 10:59:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1846389 ']' 00:04:53.314 10:59:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.314 10:59:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:53.314 10:59:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.314 10:59:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:53.314 10:59:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.314 10:59:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.572 [2024-10-06 10:59:50.908989] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
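For reference, the skip_rpc_with_delay failure recorded above can be reproduced by hand; a minimal sketch using the same binary path and flags as traced in this run (the error text is quoted from the log, and the non-zero exit is what the NOT wrapper expects):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # exits non-zero after logging:
  #   app.c: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.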
00:04:53.572 [2024-10-06 10:59:50.909032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1846389 ] 00:04:53.572 [2024-10-06 10:59:50.963205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.572 [2024-10-06 10:59:51.003704] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:53.832 [2024-10-06 10:59:51.244973] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:04:53.832 [2024-10-06 10:59:51.245019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1846476 ] 00:04:53.832 [2024-10-06 10:59:51.299085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.832 [2024-10-06 10:59:51.337781] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.832 [2024-10-06 10:59:51.337843] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:53.832 [2024-10-06 10:59:51.337852] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:53.832 [2024-10-06 10:59:51.337858] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:53.832 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:53.833 10:59:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:53.833 10:59:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1846389 00:04:53.833 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1846389 ']' 00:04:53.833 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1846389 00:04:53.833 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:54.092 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.092 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1846389 00:04:54.092 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.092 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.092 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1846389' 00:04:54.092 killing process with pid 1846389 00:04:54.092 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1846389 00:04:54.092 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1846389 00:04:54.351 00:04:54.351 real 0m0.902s 00:04:54.351 user 0m0.964s 00:04:54.351 sys 0m0.374s 00:04:54.351 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.351 10:59:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:54.351 ************************************ 00:04:54.351 END TEST exit_on_failed_rpc_init 00:04:54.351 ************************************ 00:04:54.351 10:59:51 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:54.351 00:04:54.351 real 0m13.019s 00:04:54.351 user 0m12.267s 00:04:54.351 sys 0m1.516s 00:04:54.351 10:59:51 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.351 10:59:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.351 ************************************ 00:04:54.351 END TEST skip_rpc 00:04:54.351 ************************************ 00:04:54.351 10:59:51 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:54.351 10:59:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.351 10:59:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.351 10:59:51 -- 
common/autotest_common.sh@10 -- # set +x 00:04:54.351 ************************************ 00:04:54.351 START TEST rpc_client 00:04:54.351 ************************************ 00:04:54.351 10:59:51 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:54.611 * Looking for test storage... 00:04:54.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:54.611 10:59:51 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:54.611 10:59:51 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:54.611 10:59:51 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:54.611 10:59:52 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.611 10:59:52 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:54.611 10:59:52 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.611 10:59:52 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:54.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.611 --rc genhtml_branch_coverage=1 00:04:54.611 --rc genhtml_function_coverage=1 00:04:54.611 --rc genhtml_legend=1 00:04:54.611 --rc geninfo_all_blocks=1 00:04:54.611 --rc geninfo_unexecuted_blocks=1 00:04:54.611 00:04:54.611 ' 00:04:54.611 10:59:52 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:54.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.611 --rc genhtml_branch_coverage=1 00:04:54.611 --rc genhtml_function_coverage=1 00:04:54.611 --rc genhtml_legend=1 00:04:54.611 --rc geninfo_all_blocks=1 00:04:54.611 --rc geninfo_unexecuted_blocks=1 00:04:54.611 00:04:54.611 ' 00:04:54.611 10:59:52 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:54.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.611 --rc genhtml_branch_coverage=1 00:04:54.611 --rc genhtml_function_coverage=1 00:04:54.611 --rc genhtml_legend=1 00:04:54.611 --rc geninfo_all_blocks=1 00:04:54.611 --rc geninfo_unexecuted_blocks=1 00:04:54.611 00:04:54.611 ' 00:04:54.611 10:59:52 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:54.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.611 --rc genhtml_branch_coverage=1 00:04:54.611 --rc genhtml_function_coverage=1 00:04:54.611 --rc genhtml_legend=1 00:04:54.611 --rc geninfo_all_blocks=1 00:04:54.611 --rc geninfo_unexecuted_blocks=1 00:04:54.611 00:04:54.611 ' 00:04:54.611 10:59:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:54.611 OK 00:04:54.611 10:59:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:54.611 00:04:54.611 real 0m0.187s 00:04:54.611 user 0m0.113s 00:04:54.611 sys 0m0.086s 00:04:54.611 10:59:52 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.611 10:59:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:54.611 ************************************ 00:04:54.611 END TEST rpc_client 00:04:54.611 ************************************ 00:04:54.611 10:59:52 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
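The rpc_client suite that just finished is driven by a single compiled binary; a minimal sketch of invoking it directly, using the path traced above (printing OK, as in the log, is taken as success):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
  # expected output on success:
  #   OK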
00:04:54.611 10:59:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.611 10:59:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.611 10:59:52 -- common/autotest_common.sh@10 -- # set +x 00:04:54.611 ************************************ 00:04:54.611 START TEST json_config 00:04:54.611 ************************************ 00:04:54.612 10:59:52 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:54.612 10:59:52 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:54.612 10:59:52 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:54.612 10:59:52 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:54.872 10:59:52 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:54.872 10:59:52 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.872 10:59:52 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.872 10:59:52 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.872 10:59:52 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.872 10:59:52 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.872 10:59:52 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.872 10:59:52 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.872 10:59:52 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.872 10:59:52 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.872 10:59:52 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.872 10:59:52 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.872 10:59:52 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:54.872 10:59:52 json_config -- scripts/common.sh@345 -- # : 1 00:04:54.872 10:59:52 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.872 10:59:52 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.872 10:59:52 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:54.872 10:59:52 json_config -- scripts/common.sh@353 -- # local d=1 00:04:54.872 10:59:52 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.872 10:59:52 json_config -- scripts/common.sh@355 -- # echo 1 00:04:54.872 10:59:52 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.872 10:59:52 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:54.872 10:59:52 json_config -- scripts/common.sh@353 -- # local d=2 00:04:54.872 10:59:52 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.872 10:59:52 json_config -- scripts/common.sh@355 -- # echo 2 00:04:54.872 10:59:52 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.872 10:59:52 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.872 10:59:52 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.872 10:59:52 json_config -- scripts/common.sh@368 -- # return 0 00:04:54.872 10:59:52 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.872 10:59:52 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:54.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.872 --rc genhtml_branch_coverage=1 00:04:54.872 --rc genhtml_function_coverage=1 00:04:54.872 --rc genhtml_legend=1 00:04:54.872 --rc geninfo_all_blocks=1 00:04:54.872 --rc geninfo_unexecuted_blocks=1 00:04:54.872 00:04:54.872 ' 00:04:54.872 10:59:52 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:54.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.872 --rc genhtml_branch_coverage=1 00:04:54.872 --rc genhtml_function_coverage=1 00:04:54.872 --rc genhtml_legend=1 00:04:54.872 --rc geninfo_all_blocks=1 00:04:54.872 --rc geninfo_unexecuted_blocks=1 00:04:54.872 00:04:54.872 ' 00:04:54.872 10:59:52 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:54.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.872 --rc genhtml_branch_coverage=1 00:04:54.872 --rc genhtml_function_coverage=1 00:04:54.872 --rc genhtml_legend=1 00:04:54.872 --rc geninfo_all_blocks=1 00:04:54.872 --rc geninfo_unexecuted_blocks=1 00:04:54.872 00:04:54.872 ' 00:04:54.872 10:59:52 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:54.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.872 --rc genhtml_branch_coverage=1 00:04:54.872 --rc genhtml_function_coverage=1 00:04:54.872 --rc genhtml_legend=1 00:04:54.872 --rc geninfo_all_blocks=1 00:04:54.872 --rc geninfo_unexecuted_blocks=1 00:04:54.872 00:04:54.872 ' 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:54.872 10:59:52 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:54.872 10:59:52 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:54.872 10:59:52 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:54.872 10:59:52 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:54.872 10:59:52 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:54.872 10:59:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.872 10:59:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.872 10:59:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.872 10:59:52 json_config -- paths/export.sh@5 -- # export PATH 00:04:54.872 10:59:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@51 -- # : 0 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:54.872 10:59:52 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:54.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:54.872 10:59:52 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:54.872 INFO: JSON configuration test init 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:54.872 10:59:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:54.872 10:59:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.872 10:59:52 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:54.872 10:59:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:54.872 10:59:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.873 10:59:52 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:54.873 10:59:52 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:54.873 10:59:52 json_config -- json_config/common.sh@10 -- # shift 00:04:54.873 10:59:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:54.873 10:59:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:54.873 10:59:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:54.873 10:59:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:54.873 10:59:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:54.873 10:59:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1846824 00:04:54.873 10:59:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:54.873 Waiting for target to run... 00:04:54.873 10:59:52 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:54.873 10:59:52 json_config -- json_config/common.sh@25 -- # waitforlisten 1846824 /var/tmp/spdk_tgt.sock 00:04:54.873 10:59:52 json_config -- common/autotest_common.sh@831 -- # '[' -z 1846824 ']' 00:04:54.873 10:59:52 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:54.873 10:59:52 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.873 10:59:52 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:54.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:54.873 10:59:52 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.873 10:59:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.873 [2024-10-06 10:59:52.343659] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:04:54.873 [2024-10-06 10:59:52.343706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1846824 ] 00:04:55.132 [2024-10-06 10:59:52.606042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.132 [2024-10-06 10:59:52.628952] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.701 10:59:53 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:55.701 10:59:53 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:55.701 10:59:53 json_config -- json_config/common.sh@26 -- # echo '' 00:04:55.701 00:04:55.701 10:59:53 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:55.701 10:59:53 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:55.701 10:59:53 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:55.701 10:59:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.701 10:59:53 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:55.701 10:59:53 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:55.701 10:59:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:55.701 10:59:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.701 10:59:53 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:55.701 10:59:53 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:55.701 10:59:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:58.993 10:59:56 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:58.993 10:59:56 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:58.994 10:59:56 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.994 10:59:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:58.994 10:59:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:58.994 10:59:56 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@54 -- # sort 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:58.994 10:59:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:58.994 10:59:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:58.994 10:59:56 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.994 10:59:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:58.994 10:59:56 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:58.994 10:59:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:59.252 MallocForNvmf0 00:04:59.252 10:59:56 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:59.252 10:59:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:59.511 MallocForNvmf1 00:04:59.511 10:59:56 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:59.511 10:59:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:59.511 [2024-10-06 10:59:57.082631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.769 10:59:57 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:59.769 10:59:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:59.769 10:59:57 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:59.769 10:59:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:00.029 10:59:57 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:00.029 10:59:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:00.287 10:59:57 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:00.287 10:59:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:00.287 [2024-10-06 10:59:57.848972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:00.547 10:59:57 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:00.547 10:59:57 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:00.547 10:59:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.547 10:59:57 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:00.547 10:59:57 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:00.547 10:59:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.547 10:59:57 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:00.547 10:59:57 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:00.547 10:59:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:00.547 MallocBdevForConfigChangeCheck 00:05:00.806 10:59:58 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:00.806 10:59:58 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:00.806 10:59:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.806 10:59:58 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:00.806 10:59:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:01.065 10:59:58 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:01.065 INFO: shutting down applications... 
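Condensed, the target configuration that was just built and saved corresponds to the following RPC sequence; a sketch assembled from the rpc.py calls traced above (socket path and arguments as logged; the redirection of save_config to a file is added for illustration only):

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json

The saved spdk_tgt_config.json is what the relaunch below feeds back to spdk_tgt via --json.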
00:05:01.065 10:59:58 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:01.065 10:59:58 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:01.065 10:59:58 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:01.065 10:59:58 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:02.969 Calling clear_iscsi_subsystem 00:05:02.970 Calling clear_nvmf_subsystem 00:05:02.970 Calling clear_nbd_subsystem 00:05:02.970 Calling clear_ublk_subsystem 00:05:02.970 Calling clear_vhost_blk_subsystem 00:05:02.970 Calling clear_vhost_scsi_subsystem 00:05:02.970 Calling clear_bdev_subsystem 00:05:02.970 11:00:00 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:02.970 11:00:00 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:02.970 11:00:00 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:02.970 11:00:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.970 11:00:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:02.970 11:00:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:02.970 11:00:00 json_config -- json_config/json_config.sh@352 -- # break 00:05:02.970 11:00:00 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:02.970 11:00:00 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:02.970 11:00:00 json_config -- json_config/common.sh@31 -- # local app=target 00:05:02.970 11:00:00 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:02.970 11:00:00 json_config -- json_config/common.sh@35 -- # [[ -n 1846824 ]] 00:05:02.970 11:00:00 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1846824 00:05:02.970 11:00:00 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:02.970 11:00:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.970 11:00:00 json_config -- json_config/common.sh@41 -- # kill -0 1846824 00:05:02.970 11:00:00 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.539 11:00:00 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.539 11:00:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.539 11:00:00 json_config -- json_config/common.sh@41 -- # kill -0 1846824 00:05:03.539 11:00:00 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:03.539 11:00:00 json_config -- json_config/common.sh@43 -- # break 00:05:03.539 11:00:00 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:03.539 11:00:00 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:03.539 SPDK target shutdown done 00:05:03.539 11:00:00 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:03.539 INFO: relaunching applications... 
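The shutdown just logged follows a clear-then-verify pattern; roughly, and with the exact pipeline left to json_config.sh (paths as in this workspace, the pipe structure is an assumption inferred from the traced commands):

  test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method delete_global_parameters \
    | test/json_config/config_filter.py -method check_empty
  kill -SIGINT "$target_pid"   # $target_pid stands for the spdk_tgt pid (1846824 in this run)

Only once check_empty passes does the script send SIGINT and print 'SPDK target shutdown done'.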
00:05:03.539 11:00:00 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:03.539 11:00:00 json_config -- json_config/common.sh@9 -- # local app=target 00:05:03.539 11:00:00 json_config -- json_config/common.sh@10 -- # shift 00:05:03.539 11:00:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:03.539 11:00:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:03.539 11:00:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:03.539 11:00:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.539 11:00:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.539 11:00:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1848359 00:05:03.539 11:00:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:03.539 Waiting for target to run... 00:05:03.539 11:00:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:03.539 11:00:00 json_config -- json_config/common.sh@25 -- # waitforlisten 1848359 /var/tmp/spdk_tgt.sock 00:05:03.539 11:00:00 json_config -- common/autotest_common.sh@831 -- # '[' -z 1848359 ']' 00:05:03.539 11:00:00 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:03.539 11:00:00 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.539 11:00:00 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:03.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:03.539 11:00:00 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.539 11:00:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.539 [2024-10-06 11:00:01.001900] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:05:03.539 [2024-10-06 11:00:01.001961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1848359 ] 00:05:04.107 [2024-10-06 11:00:01.452609] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.107 [2024-10-06 11:00:01.481365] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.398 [2024-10-06 11:00:04.482634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:07.398 [2024-10-06 11:00:04.514929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:07.656 11:00:05 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:07.656 11:00:05 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:07.657 11:00:05 json_config -- json_config/common.sh@26 -- # echo '' 00:05:07.657 00:05:07.657 11:00:05 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:07.657 11:00:05 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:07.657 INFO: Checking if target configuration is the same... 
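The check traced below feeds a fresh save_config dump and the saved spdk_tgt_config.json through config_filter.py's sort method and diffs the results; an empty diff means the relaunched target reproduced the same configuration. A minimal hand-run sketch of that comparison (temp-file names are illustrative, the tool paths and options come from the trace; the harness itself uses /dev/fd/62 and mktemp instead):

  # normalize both configs so ordering differences do not show up as changes
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | ./test/json_config/config_filter.py -method sort > /tmp/live.json
  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'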
00:05:07.657 11:00:05 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.657 11:00:05 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:07.657 11:00:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.657 + '[' 2 -ne 2 ']' 00:05:07.657 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:07.657 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:07.657 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:07.657 +++ basename /dev/fd/62 00:05:07.657 ++ mktemp /tmp/62.XXX 00:05:07.657 + tmp_file_1=/tmp/62.FwD 00:05:07.657 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.657 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:07.657 + tmp_file_2=/tmp/spdk_tgt_config.json.63O 00:05:07.657 + ret=0 00:05:07.657 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:08.225 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:08.225 + diff -u /tmp/62.FwD /tmp/spdk_tgt_config.json.63O 00:05:08.225 + echo 'INFO: JSON config files are the same' 00:05:08.225 INFO: JSON config files are the same 00:05:08.225 + rm /tmp/62.FwD /tmp/spdk_tgt_config.json.63O 00:05:08.225 + exit 0 00:05:08.225 11:00:05 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:08.225 11:00:05 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:08.225 INFO: changing configuration and checking if this can be detected... 00:05:08.225 11:00:05 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:08.225 11:00:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:08.225 11:00:05 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:08.225 11:00:05 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:08.226 11:00:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.226 + '[' 2 -ne 2 ']' 00:05:08.226 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:08.226 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:08.226 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:08.485 +++ basename /dev/fd/62 00:05:08.485 ++ mktemp /tmp/62.XXX 00:05:08.485 + tmp_file_1=/tmp/62.pp5 00:05:08.485 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:08.485 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:08.485 + tmp_file_2=/tmp/spdk_tgt_config.json.Rx9 00:05:08.485 + ret=0 00:05:08.485 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:08.744 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:08.744 + diff -u /tmp/62.pp5 /tmp/spdk_tgt_config.json.Rx9 00:05:08.744 + ret=1 00:05:08.744 + echo '=== Start of file: /tmp/62.pp5 ===' 00:05:08.744 + cat /tmp/62.pp5 00:05:08.744 + echo '=== End of file: /tmp/62.pp5 ===' 00:05:08.744 + echo '' 00:05:08.745 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Rx9 ===' 00:05:08.745 + cat /tmp/spdk_tgt_config.json.Rx9 00:05:08.745 + echo '=== End of file: /tmp/spdk_tgt_config.json.Rx9 ===' 00:05:08.745 + echo '' 00:05:08.745 + rm /tmp/62.pp5 /tmp/spdk_tgt_config.json.Rx9 00:05:08.745 + exit 1 00:05:08.745 11:00:06 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:08.745 INFO: configuration change detected. 00:05:08.745 11:00:06 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:08.745 11:00:06 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:08.745 11:00:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:08.745 11:00:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.745 11:00:06 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:08.745 11:00:06 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:08.745 11:00:06 json_config -- json_config/json_config.sh@324 -- # [[ -n 1848359 ]] 00:05:08.745 11:00:06 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:08.745 11:00:06 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:08.745 11:00:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:08.745 11:00:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.745 11:00:06 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:08.745 11:00:06 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:08.745 11:00:06 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:08.745 11:00:06 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:08.745 11:00:06 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:08.745 11:00:06 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:08.745 11:00:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:08.745 11:00:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.745 11:00:06 json_config -- json_config/json_config.sh@330 -- # killprocess 1848359 00:05:08.745 11:00:06 json_config -- common/autotest_common.sh@950 -- # '[' -z 1848359 ']' 00:05:08.745 11:00:06 json_config -- common/autotest_common.sh@954 -- # kill -0 1848359 00:05:08.745 11:00:06 json_config -- common/autotest_common.sh@955 -- # uname 00:05:08.745 11:00:06 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.745 11:00:06 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1848359 00:05:08.745 11:00:06 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:08.745 11:00:06 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:08.745 11:00:06 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1848359' 00:05:08.745 killing process with pid 1848359 00:05:08.745 11:00:06 json_config -- common/autotest_common.sh@969 -- # kill 1848359 00:05:08.745 11:00:06 json_config -- common/autotest_common.sh@974 -- # wait 1848359 00:05:10.650 11:00:07 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.650 11:00:07 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:10.650 11:00:07 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.650 11:00:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.650 11:00:07 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:10.650 11:00:07 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:10.650 INFO: Success 00:05:10.650 00:05:10.650 real 0m15.663s 00:05:10.650 user 0m16.798s 00:05:10.650 sys 0m1.889s 00:05:10.650 11:00:07 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.650 11:00:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.650 ************************************ 00:05:10.650 END TEST json_config 00:05:10.650 ************************************ 00:05:10.650 11:00:07 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:10.650 11:00:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.650 11:00:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.650 11:00:07 -- common/autotest_common.sh@10 -- # set +x 00:05:10.650 ************************************ 00:05:10.650 START TEST json_config_extra_key 00:05:10.650 ************************************ 00:05:10.650 11:00:07 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:10.650 11:00:07 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:10.650 11:00:07 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:10.650 11:00:07 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:10.650 11:00:07 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:10.650 11:00:07 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.650 11:00:07 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.650 11:00:07 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.650 11:00:07 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.650 11:00:07 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.650 11:00:07 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.650 11:00:07 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.650 11:00:07 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.650 11:00:07 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:05:10.650 11:00:07 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.650 11:00:07 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.650 11:00:07 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:10.650 11:00:07 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:10.650 11:00:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.651 11:00:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.651 11:00:07 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:10.651 11:00:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:10.651 11:00:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.651 11:00:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:10.651 11:00:07 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.651 11:00:07 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:10.651 11:00:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:10.651 11:00:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.651 11:00:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:10.651 11:00:07 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.651 11:00:07 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.651 11:00:07 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.651 11:00:07 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:10.651 11:00:07 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.651 11:00:07 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:10.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.651 --rc genhtml_branch_coverage=1 00:05:10.651 --rc genhtml_function_coverage=1 00:05:10.651 --rc genhtml_legend=1 00:05:10.651 --rc geninfo_all_blocks=1 00:05:10.651 --rc geninfo_unexecuted_blocks=1 00:05:10.651 00:05:10.651 ' 00:05:10.651 11:00:07 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:10.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.651 --rc genhtml_branch_coverage=1 00:05:10.651 --rc genhtml_function_coverage=1 00:05:10.651 --rc genhtml_legend=1 00:05:10.651 --rc geninfo_all_blocks=1 00:05:10.651 --rc geninfo_unexecuted_blocks=1 00:05:10.651 00:05:10.651 ' 00:05:10.651 11:00:07 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:10.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.651 --rc genhtml_branch_coverage=1 00:05:10.651 --rc genhtml_function_coverage=1 00:05:10.651 --rc genhtml_legend=1 00:05:10.651 --rc geninfo_all_blocks=1 00:05:10.651 --rc geninfo_unexecuted_blocks=1 00:05:10.651 00:05:10.651 ' 00:05:10.651 11:00:07 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:10.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.651 --rc genhtml_branch_coverage=1 00:05:10.651 --rc genhtml_function_coverage=1 00:05:10.651 --rc genhtml_legend=1 00:05:10.651 --rc geninfo_all_blocks=1 00:05:10.651 --rc geninfo_unexecuted_blocks=1 00:05:10.651 00:05:10.651 ' 00:05:10.651 11:00:07 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:10.651 11:00:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:10.651 11:00:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.651 11:00:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.651 11:00:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.651 11:00:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.651 11:00:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.651 11:00:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.651 11:00:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.651 11:00:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.651 11:00:07 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.651 11:00:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.651 11:00:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:10.651 11:00:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:10.651 11:00:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.651 11:00:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.651 11:00:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:10.651 11:00:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.651 11:00:08 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:10.651 11:00:08 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:10.651 11:00:08 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.651 11:00:08 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.651 11:00:08 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.651 11:00:08 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.651 11:00:08 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.651 11:00:08 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.651 11:00:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:10.651 11:00:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.651 11:00:08 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:10.651 11:00:08 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:10.651 11:00:08 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:10.651 11:00:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.651 11:00:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.651 11:00:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.651 11:00:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:10.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:10.651 11:00:08 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:10.651 11:00:08 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:10.651 11:00:08 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:10.651 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:10.651 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:10.651 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:10.651 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:10.651 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:10.651 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:10.651 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:10.651 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:10.651 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:10.651 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:10.651 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:10.651 INFO: launching applications... 
00:05:10.651 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:10.651 11:00:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:10.651 11:00:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:10.651 11:00:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:10.651 11:00:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:10.651 11:00:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:10.651 11:00:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.651 11:00:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.651 11:00:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1849687 00:05:10.651 11:00:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:10.651 Waiting for target to run... 00:05:10.651 11:00:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1849687 /var/tmp/spdk_tgt.sock 00:05:10.651 11:00:08 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1849687 ']' 00:05:10.651 11:00:08 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:10.651 11:00:08 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:10.651 11:00:08 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.651 11:00:08 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:10.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:10.651 11:00:08 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.651 11:00:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:10.651 [2024-10-06 11:00:08.080668] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:05:10.652 [2024-10-06 11:00:08.080717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1849687 ] 00:05:11.240 [2024-10-06 11:00:08.518620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.240 [2024-10-06 11:00:08.550781] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.499 11:00:08 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.499 11:00:08 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:11.499 11:00:08 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:11.499 00:05:11.499 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:11.499 INFO: shutting down applications... 
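For the extra-key variant the target is launched directly with a pre-built JSON configuration instead of being configured over RPC, as the trace above shows. A shortened hand-run equivalent (binary, socket, and config paths from the trace; the rpc_get_methods poll stands in for the harness's waitforlisten loop and is illustrative):

  # start spdk_tgt with the extra_key.json config and wait for its RPC socket to answer
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json ./test/json_config/extra_key.json &
  until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # the test body runs here; shutdown mirrors the log: SIGINT, then poll until the pid is gone
  kill -SIGINT %1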
00:05:11.499 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:11.499 11:00:08 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:11.499 11:00:08 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:11.499 11:00:08 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1849687 ]] 00:05:11.499 11:00:08 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1849687 00:05:11.499 11:00:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:11.499 11:00:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.499 11:00:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1849687 00:05:11.499 11:00:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:12.067 11:00:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:12.067 11:00:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.067 11:00:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1849687 00:05:12.067 11:00:09 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:12.067 11:00:09 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:12.067 11:00:09 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:12.067 11:00:09 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:12.067 SPDK target shutdown done 00:05:12.067 11:00:09 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:12.067 Success 00:05:12.067 00:05:12.067 real 0m1.583s 00:05:12.067 user 0m1.220s 00:05:12.067 sys 0m0.579s 00:05:12.067 11:00:09 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.067 11:00:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:12.067 ************************************ 00:05:12.067 END TEST json_config_extra_key 00:05:12.067 ************************************ 00:05:12.067 11:00:09 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.067 11:00:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.067 11:00:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.067 11:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:12.067 ************************************ 00:05:12.067 START TEST alias_rpc 00:05:12.067 ************************************ 00:05:12.067 11:00:09 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.067 * Looking for test storage... 
00:05:12.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:12.067 11:00:09 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:12.067 11:00:09 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:12.067 11:00:09 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:12.326 11:00:09 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.326 11:00:09 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:12.326 11:00:09 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.326 11:00:09 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:12.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.326 --rc genhtml_branch_coverage=1 00:05:12.326 --rc genhtml_function_coverage=1 00:05:12.326 --rc genhtml_legend=1 00:05:12.326 --rc geninfo_all_blocks=1 00:05:12.326 --rc geninfo_unexecuted_blocks=1 00:05:12.326 00:05:12.326 ' 00:05:12.326 11:00:09 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:12.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.326 --rc genhtml_branch_coverage=1 00:05:12.326 --rc genhtml_function_coverage=1 00:05:12.326 --rc genhtml_legend=1 00:05:12.326 --rc geninfo_all_blocks=1 00:05:12.326 --rc geninfo_unexecuted_blocks=1 00:05:12.326 00:05:12.326 ' 00:05:12.326 11:00:09 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:12.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.326 --rc genhtml_branch_coverage=1 00:05:12.326 --rc genhtml_function_coverage=1 00:05:12.327 --rc genhtml_legend=1 00:05:12.327 --rc geninfo_all_blocks=1 00:05:12.327 --rc geninfo_unexecuted_blocks=1 00:05:12.327 00:05:12.327 ' 00:05:12.327 11:00:09 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:12.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.327 --rc genhtml_branch_coverage=1 00:05:12.327 --rc genhtml_function_coverage=1 00:05:12.327 --rc genhtml_legend=1 00:05:12.327 --rc geninfo_all_blocks=1 00:05:12.327 --rc geninfo_unexecuted_blocks=1 00:05:12.327 00:05:12.327 ' 00:05:12.327 11:00:09 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:12.327 11:00:09 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1850527 00:05:12.327 11:00:09 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.327 11:00:09 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1850527 00:05:12.327 11:00:09 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1850527 ']' 00:05:12.327 11:00:09 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.327 11:00:09 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.327 11:00:09 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.327 11:00:09 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.327 11:00:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.327 [2024-10-06 11:00:09.728262] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:05:12.327 [2024-10-06 11:00:09.728312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1850527 ] 00:05:12.327 [2024-10-06 11:00:09.782291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.327 [2024-10-06 11:00:09.822367] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.585 11:00:10 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.585 11:00:10 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:12.585 11:00:10 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:12.844 11:00:10 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1850527 00:05:12.844 11:00:10 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1850527 ']' 00:05:12.844 11:00:10 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1850527 00:05:12.844 11:00:10 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:12.844 11:00:10 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:12.844 11:00:10 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1850527 00:05:12.844 11:00:10 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:12.844 11:00:10 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:12.844 11:00:10 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1850527' 00:05:12.844 killing process with pid 1850527 00:05:12.844 11:00:10 alias_rpc -- common/autotest_common.sh@969 -- # kill 1850527 00:05:12.844 11:00:10 alias_rpc -- common/autotest_common.sh@974 -- # wait 1850527 00:05:13.103 00:05:13.103 real 0m1.103s 00:05:13.103 user 0m1.124s 00:05:13.103 sys 0m0.408s 00:05:13.103 11:00:10 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.103 11:00:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.103 ************************************ 00:05:13.103 END TEST alias_rpc 00:05:13.103 ************************************ 00:05:13.103 11:00:10 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:13.103 11:00:10 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:13.103 11:00:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.103 11:00:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.103 11:00:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.103 ************************************ 00:05:13.103 START TEST spdkcli_tcp 00:05:13.103 ************************************ 00:05:13.103 11:00:10 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:13.363 * Looking for test storage... 
00:05:13.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:13.363 11:00:10 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:13.363 11:00:10 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:13.363 11:00:10 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:13.363 11:00:10 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:13.363 11:00:10 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.364 11:00:10 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.364 11:00:10 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.364 11:00:10 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:13.364 11:00:10 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.364 11:00:10 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:13.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.364 --rc genhtml_branch_coverage=1 00:05:13.364 --rc genhtml_function_coverage=1 00:05:13.364 --rc genhtml_legend=1 00:05:13.364 --rc geninfo_all_blocks=1 00:05:13.364 --rc geninfo_unexecuted_blocks=1 00:05:13.364 00:05:13.364 ' 00:05:13.364 11:00:10 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:13.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.364 --rc genhtml_branch_coverage=1 00:05:13.364 --rc genhtml_function_coverage=1 00:05:13.364 --rc genhtml_legend=1 00:05:13.364 --rc geninfo_all_blocks=1 00:05:13.364 --rc 
geninfo_unexecuted_blocks=1 00:05:13.364 00:05:13.364 ' 00:05:13.364 11:00:10 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:13.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.364 --rc genhtml_branch_coverage=1 00:05:13.364 --rc genhtml_function_coverage=1 00:05:13.364 --rc genhtml_legend=1 00:05:13.364 --rc geninfo_all_blocks=1 00:05:13.364 --rc geninfo_unexecuted_blocks=1 00:05:13.364 00:05:13.364 ' 00:05:13.364 11:00:10 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:13.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.364 --rc genhtml_branch_coverage=1 00:05:13.364 --rc genhtml_function_coverage=1 00:05:13.364 --rc genhtml_legend=1 00:05:13.364 --rc geninfo_all_blocks=1 00:05:13.364 --rc geninfo_unexecuted_blocks=1 00:05:13.364 00:05:13.364 ' 00:05:13.364 11:00:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:13.364 11:00:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:13.364 11:00:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:13.364 11:00:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:13.364 11:00:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:13.364 11:00:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:13.364 11:00:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:13.364 11:00:10 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:13.364 11:00:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.364 11:00:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1850712 00:05:13.364 11:00:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1850712 00:05:13.364 11:00:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:13.364 11:00:10 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1850712 ']' 00:05:13.364 11:00:10 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.364 11:00:10 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.364 11:00:10 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.364 11:00:10 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.364 11:00:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.364 [2024-10-06 11:00:10.879512] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:05:13.364 [2024-10-06 11:00:10.879563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1850712 ] 00:05:13.364 [2024-10-06 11:00:10.935509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.623 [2024-10-06 11:00:10.975245] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.623 [2024-10-06 11:00:10.975247] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.623 11:00:11 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.623 11:00:11 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:13.623 11:00:11 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1850850 00:05:13.623 11:00:11 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:13.623 11:00:11 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:13.883 [ 00:05:13.883 "bdev_malloc_delete", 00:05:13.883 "bdev_malloc_create", 00:05:13.883 "bdev_null_resize", 00:05:13.883 "bdev_null_delete", 00:05:13.883 "bdev_null_create", 00:05:13.883 "bdev_nvme_cuse_unregister", 00:05:13.883 "bdev_nvme_cuse_register", 00:05:13.883 "bdev_opal_new_user", 00:05:13.883 "bdev_opal_set_lock_state", 00:05:13.883 "bdev_opal_delete", 00:05:13.883 "bdev_opal_get_info", 00:05:13.883 "bdev_opal_create", 00:05:13.883 "bdev_nvme_opal_revert", 00:05:13.883 "bdev_nvme_opal_init", 00:05:13.883 "bdev_nvme_send_cmd", 00:05:13.883 "bdev_nvme_set_keys", 00:05:13.883 "bdev_nvme_get_path_iostat", 00:05:13.883 "bdev_nvme_get_mdns_discovery_info", 00:05:13.883 "bdev_nvme_stop_mdns_discovery", 00:05:13.883 "bdev_nvme_start_mdns_discovery", 00:05:13.883 "bdev_nvme_set_multipath_policy", 00:05:13.883 "bdev_nvme_set_preferred_path", 00:05:13.883 "bdev_nvme_get_io_paths", 00:05:13.883 "bdev_nvme_remove_error_injection", 00:05:13.883 "bdev_nvme_add_error_injection", 00:05:13.883 "bdev_nvme_get_discovery_info", 00:05:13.883 "bdev_nvme_stop_discovery", 00:05:13.883 "bdev_nvme_start_discovery", 00:05:13.883 "bdev_nvme_get_controller_health_info", 00:05:13.883 "bdev_nvme_disable_controller", 00:05:13.883 "bdev_nvme_enable_controller", 00:05:13.883 "bdev_nvme_reset_controller", 00:05:13.883 "bdev_nvme_get_transport_statistics", 00:05:13.883 "bdev_nvme_apply_firmware", 00:05:13.883 "bdev_nvme_detach_controller", 00:05:13.883 "bdev_nvme_get_controllers", 00:05:13.883 "bdev_nvme_attach_controller", 00:05:13.883 "bdev_nvme_set_hotplug", 00:05:13.883 "bdev_nvme_set_options", 00:05:13.883 "bdev_passthru_delete", 00:05:13.883 "bdev_passthru_create", 00:05:13.883 "bdev_lvol_set_parent_bdev", 00:05:13.883 "bdev_lvol_set_parent", 00:05:13.883 "bdev_lvol_check_shallow_copy", 00:05:13.883 "bdev_lvol_start_shallow_copy", 00:05:13.883 "bdev_lvol_grow_lvstore", 00:05:13.883 "bdev_lvol_get_lvols", 00:05:13.883 "bdev_lvol_get_lvstores", 00:05:13.883 "bdev_lvol_delete", 00:05:13.883 "bdev_lvol_set_read_only", 00:05:13.883 "bdev_lvol_resize", 00:05:13.883 "bdev_lvol_decouple_parent", 00:05:13.883 "bdev_lvol_inflate", 00:05:13.883 "bdev_lvol_rename", 00:05:13.883 "bdev_lvol_clone_bdev", 00:05:13.883 "bdev_lvol_clone", 00:05:13.883 "bdev_lvol_snapshot", 00:05:13.883 "bdev_lvol_create", 00:05:13.883 "bdev_lvol_delete_lvstore", 00:05:13.883 "bdev_lvol_rename_lvstore", 
00:05:13.883 "bdev_lvol_create_lvstore", 00:05:13.883 "bdev_raid_set_options", 00:05:13.883 "bdev_raid_remove_base_bdev", 00:05:13.883 "bdev_raid_add_base_bdev", 00:05:13.883 "bdev_raid_delete", 00:05:13.883 "bdev_raid_create", 00:05:13.883 "bdev_raid_get_bdevs", 00:05:13.883 "bdev_error_inject_error", 00:05:13.883 "bdev_error_delete", 00:05:13.883 "bdev_error_create", 00:05:13.883 "bdev_split_delete", 00:05:13.883 "bdev_split_create", 00:05:13.883 "bdev_delay_delete", 00:05:13.883 "bdev_delay_create", 00:05:13.883 "bdev_delay_update_latency", 00:05:13.883 "bdev_zone_block_delete", 00:05:13.883 "bdev_zone_block_create", 00:05:13.883 "blobfs_create", 00:05:13.883 "blobfs_detect", 00:05:13.883 "blobfs_set_cache_size", 00:05:13.883 "bdev_aio_delete", 00:05:13.883 "bdev_aio_rescan", 00:05:13.883 "bdev_aio_create", 00:05:13.883 "bdev_ftl_set_property", 00:05:13.883 "bdev_ftl_get_properties", 00:05:13.883 "bdev_ftl_get_stats", 00:05:13.883 "bdev_ftl_unmap", 00:05:13.883 "bdev_ftl_unload", 00:05:13.883 "bdev_ftl_delete", 00:05:13.883 "bdev_ftl_load", 00:05:13.883 "bdev_ftl_create", 00:05:13.883 "bdev_virtio_attach_controller", 00:05:13.883 "bdev_virtio_scsi_get_devices", 00:05:13.883 "bdev_virtio_detach_controller", 00:05:13.883 "bdev_virtio_blk_set_hotplug", 00:05:13.883 "bdev_iscsi_delete", 00:05:13.883 "bdev_iscsi_create", 00:05:13.883 "bdev_iscsi_set_options", 00:05:13.883 "accel_error_inject_error", 00:05:13.883 "ioat_scan_accel_module", 00:05:13.883 "dsa_scan_accel_module", 00:05:13.883 "iaa_scan_accel_module", 00:05:13.883 "vfu_virtio_create_fs_endpoint", 00:05:13.883 "vfu_virtio_create_scsi_endpoint", 00:05:13.883 "vfu_virtio_scsi_remove_target", 00:05:13.883 "vfu_virtio_scsi_add_target", 00:05:13.883 "vfu_virtio_create_blk_endpoint", 00:05:13.883 "vfu_virtio_delete_endpoint", 00:05:13.883 "keyring_file_remove_key", 00:05:13.883 "keyring_file_add_key", 00:05:13.883 "keyring_linux_set_options", 00:05:13.883 "fsdev_aio_delete", 00:05:13.883 "fsdev_aio_create", 00:05:13.883 "iscsi_get_histogram", 00:05:13.883 "iscsi_enable_histogram", 00:05:13.883 "iscsi_set_options", 00:05:13.883 "iscsi_get_auth_groups", 00:05:13.883 "iscsi_auth_group_remove_secret", 00:05:13.883 "iscsi_auth_group_add_secret", 00:05:13.883 "iscsi_delete_auth_group", 00:05:13.883 "iscsi_create_auth_group", 00:05:13.883 "iscsi_set_discovery_auth", 00:05:13.883 "iscsi_get_options", 00:05:13.883 "iscsi_target_node_request_logout", 00:05:13.883 "iscsi_target_node_set_redirect", 00:05:13.883 "iscsi_target_node_set_auth", 00:05:13.883 "iscsi_target_node_add_lun", 00:05:13.883 "iscsi_get_stats", 00:05:13.883 "iscsi_get_connections", 00:05:13.883 "iscsi_portal_group_set_auth", 00:05:13.883 "iscsi_start_portal_group", 00:05:13.883 "iscsi_delete_portal_group", 00:05:13.883 "iscsi_create_portal_group", 00:05:13.883 "iscsi_get_portal_groups", 00:05:13.883 "iscsi_delete_target_node", 00:05:13.883 "iscsi_target_node_remove_pg_ig_maps", 00:05:13.883 "iscsi_target_node_add_pg_ig_maps", 00:05:13.883 "iscsi_create_target_node", 00:05:13.883 "iscsi_get_target_nodes", 00:05:13.883 "iscsi_delete_initiator_group", 00:05:13.883 "iscsi_initiator_group_remove_initiators", 00:05:13.883 "iscsi_initiator_group_add_initiators", 00:05:13.883 "iscsi_create_initiator_group", 00:05:13.883 "iscsi_get_initiator_groups", 00:05:13.883 "nvmf_set_crdt", 00:05:13.883 "nvmf_set_config", 00:05:13.883 "nvmf_set_max_subsystems", 00:05:13.883 "nvmf_stop_mdns_prr", 00:05:13.883 "nvmf_publish_mdns_prr", 00:05:13.883 "nvmf_subsystem_get_listeners", 00:05:13.883 
"nvmf_subsystem_get_qpairs", 00:05:13.883 "nvmf_subsystem_get_controllers", 00:05:13.883 "nvmf_get_stats", 00:05:13.883 "nvmf_get_transports", 00:05:13.883 "nvmf_create_transport", 00:05:13.883 "nvmf_get_targets", 00:05:13.883 "nvmf_delete_target", 00:05:13.883 "nvmf_create_target", 00:05:13.883 "nvmf_subsystem_allow_any_host", 00:05:13.883 "nvmf_subsystem_set_keys", 00:05:13.883 "nvmf_subsystem_remove_host", 00:05:13.883 "nvmf_subsystem_add_host", 00:05:13.883 "nvmf_ns_remove_host", 00:05:13.883 "nvmf_ns_add_host", 00:05:13.883 "nvmf_subsystem_remove_ns", 00:05:13.883 "nvmf_subsystem_set_ns_ana_group", 00:05:13.883 "nvmf_subsystem_add_ns", 00:05:13.883 "nvmf_subsystem_listener_set_ana_state", 00:05:13.883 "nvmf_discovery_get_referrals", 00:05:13.883 "nvmf_discovery_remove_referral", 00:05:13.883 "nvmf_discovery_add_referral", 00:05:13.883 "nvmf_subsystem_remove_listener", 00:05:13.883 "nvmf_subsystem_add_listener", 00:05:13.883 "nvmf_delete_subsystem", 00:05:13.883 "nvmf_create_subsystem", 00:05:13.883 "nvmf_get_subsystems", 00:05:13.883 "env_dpdk_get_mem_stats", 00:05:13.883 "nbd_get_disks", 00:05:13.883 "nbd_stop_disk", 00:05:13.883 "nbd_start_disk", 00:05:13.883 "ublk_recover_disk", 00:05:13.883 "ublk_get_disks", 00:05:13.883 "ublk_stop_disk", 00:05:13.883 "ublk_start_disk", 00:05:13.883 "ublk_destroy_target", 00:05:13.883 "ublk_create_target", 00:05:13.883 "virtio_blk_create_transport", 00:05:13.883 "virtio_blk_get_transports", 00:05:13.883 "vhost_controller_set_coalescing", 00:05:13.883 "vhost_get_controllers", 00:05:13.883 "vhost_delete_controller", 00:05:13.883 "vhost_create_blk_controller", 00:05:13.883 "vhost_scsi_controller_remove_target", 00:05:13.883 "vhost_scsi_controller_add_target", 00:05:13.883 "vhost_start_scsi_controller", 00:05:13.883 "vhost_create_scsi_controller", 00:05:13.883 "thread_set_cpumask", 00:05:13.883 "scheduler_set_options", 00:05:13.883 "framework_get_governor", 00:05:13.883 "framework_get_scheduler", 00:05:13.883 "framework_set_scheduler", 00:05:13.883 "framework_get_reactors", 00:05:13.883 "thread_get_io_channels", 00:05:13.883 "thread_get_pollers", 00:05:13.883 "thread_get_stats", 00:05:13.883 "framework_monitor_context_switch", 00:05:13.883 "spdk_kill_instance", 00:05:13.884 "log_enable_timestamps", 00:05:13.884 "log_get_flags", 00:05:13.884 "log_clear_flag", 00:05:13.884 "log_set_flag", 00:05:13.884 "log_get_level", 00:05:13.884 "log_set_level", 00:05:13.884 "log_get_print_level", 00:05:13.884 "log_set_print_level", 00:05:13.884 "framework_enable_cpumask_locks", 00:05:13.884 "framework_disable_cpumask_locks", 00:05:13.884 "framework_wait_init", 00:05:13.884 "framework_start_init", 00:05:13.884 "scsi_get_devices", 00:05:13.884 "bdev_get_histogram", 00:05:13.884 "bdev_enable_histogram", 00:05:13.884 "bdev_set_qos_limit", 00:05:13.884 "bdev_set_qd_sampling_period", 00:05:13.884 "bdev_get_bdevs", 00:05:13.884 "bdev_reset_iostat", 00:05:13.884 "bdev_get_iostat", 00:05:13.884 "bdev_examine", 00:05:13.884 "bdev_wait_for_examine", 00:05:13.884 "bdev_set_options", 00:05:13.884 "accel_get_stats", 00:05:13.884 "accel_set_options", 00:05:13.884 "accel_set_driver", 00:05:13.884 "accel_crypto_key_destroy", 00:05:13.884 "accel_crypto_keys_get", 00:05:13.884 "accel_crypto_key_create", 00:05:13.884 "accel_assign_opc", 00:05:13.884 "accel_get_module_info", 00:05:13.884 "accel_get_opc_assignments", 00:05:13.884 "vmd_rescan", 00:05:13.884 "vmd_remove_device", 00:05:13.884 "vmd_enable", 00:05:13.884 "sock_get_default_impl", 00:05:13.884 "sock_set_default_impl", 
00:05:13.884 "sock_impl_set_options", 00:05:13.884 "sock_impl_get_options", 00:05:13.884 "iobuf_get_stats", 00:05:13.884 "iobuf_set_options", 00:05:13.884 "keyring_get_keys", 00:05:13.884 "vfu_tgt_set_base_path", 00:05:13.884 "framework_get_pci_devices", 00:05:13.884 "framework_get_config", 00:05:13.884 "framework_get_subsystems", 00:05:13.884 "fsdev_set_opts", 00:05:13.884 "fsdev_get_opts", 00:05:13.884 "trace_get_info", 00:05:13.884 "trace_get_tpoint_group_mask", 00:05:13.884 "trace_disable_tpoint_group", 00:05:13.884 "trace_enable_tpoint_group", 00:05:13.884 "trace_clear_tpoint_mask", 00:05:13.884 "trace_set_tpoint_mask", 00:05:13.884 "notify_get_notifications", 00:05:13.884 "notify_get_types", 00:05:13.884 "spdk_get_version", 00:05:13.884 "rpc_get_methods" 00:05:13.884 ] 00:05:13.884 11:00:11 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:13.884 11:00:11 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:13.884 11:00:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.884 11:00:11 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:13.884 11:00:11 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1850712 00:05:13.884 11:00:11 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1850712 ']' 00:05:13.884 11:00:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1850712 00:05:13.884 11:00:11 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:13.884 11:00:11 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.884 11:00:11 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1850712 00:05:13.884 11:00:11 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.884 11:00:11 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.884 11:00:11 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1850712' 00:05:13.884 killing process with pid 1850712 00:05:13.884 11:00:11 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1850712 00:05:13.884 11:00:11 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1850712 00:05:14.452 00:05:14.452 real 0m1.094s 00:05:14.452 user 0m1.812s 00:05:14.452 sys 0m0.444s 00:05:14.452 11:00:11 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.452 11:00:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.452 ************************************ 00:05:14.452 END TEST spdkcli_tcp 00:05:14.452 ************************************ 00:05:14.452 11:00:11 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:14.452 11:00:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.452 11:00:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.452 11:00:11 -- common/autotest_common.sh@10 -- # set +x 00:05:14.452 ************************************ 00:05:14.452 START TEST dpdk_mem_utility 00:05:14.452 ************************************ 00:05:14.452 11:00:11 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:14.452 * Looking for test storage... 
00:05:14.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:14.452 11:00:11 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:14.452 11:00:11 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:14.452 11:00:11 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:14.452 11:00:11 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:14.452 11:00:11 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.452 11:00:11 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.452 11:00:11 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.452 11:00:11 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.452 11:00:11 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:14.453 11:00:11 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.453 11:00:12 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.453 11:00:12 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.453 11:00:12 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:14.453 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.453 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:14.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.453 --rc genhtml_branch_coverage=1 00:05:14.453 --rc genhtml_function_coverage=1 00:05:14.453 --rc genhtml_legend=1 00:05:14.453 --rc geninfo_all_blocks=1 00:05:14.453 --rc geninfo_unexecuted_blocks=1 00:05:14.453 00:05:14.453 ' 00:05:14.453 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:14.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.453 --rc 
genhtml_branch_coverage=1 00:05:14.453 --rc genhtml_function_coverage=1 00:05:14.453 --rc genhtml_legend=1 00:05:14.453 --rc geninfo_all_blocks=1 00:05:14.453 --rc geninfo_unexecuted_blocks=1 00:05:14.453 00:05:14.453 ' 00:05:14.453 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:14.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.453 --rc genhtml_branch_coverage=1 00:05:14.453 --rc genhtml_function_coverage=1 00:05:14.453 --rc genhtml_legend=1 00:05:14.453 --rc geninfo_all_blocks=1 00:05:14.453 --rc geninfo_unexecuted_blocks=1 00:05:14.453 00:05:14.453 ' 00:05:14.453 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:14.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.453 --rc genhtml_branch_coverage=1 00:05:14.453 --rc genhtml_function_coverage=1 00:05:14.453 --rc genhtml_legend=1 00:05:14.453 --rc geninfo_all_blocks=1 00:05:14.453 --rc geninfo_unexecuted_blocks=1 00:05:14.453 00:05:14.453 ' 00:05:14.453 11:00:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:14.453 11:00:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1850937 00:05:14.453 11:00:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.453 11:00:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1850937 00:05:14.453 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1850937 ']' 00:05:14.453 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.453 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.453 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.453 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.453 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:14.712 [2024-10-06 11:00:12.054071] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:05:14.712 [2024-10-06 11:00:12.054121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1850937 ] 00:05:14.712 [2024-10-06 11:00:12.109357] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.712 [2024-10-06 11:00:12.149669] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.971 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.971 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:14.971 11:00:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:14.971 11:00:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:14.971 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.971 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:14.971 { 00:05:14.972 "filename": "/tmp/spdk_mem_dump.txt" 00:05:14.972 } 00:05:14.972 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.972 11:00:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:14.972 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:14.972 1 heaps totaling size 860.000000 MiB 00:05:14.972 size: 860.000000 MiB heap id: 0 00:05:14.972 end heaps---------- 00:05:14.972 9 mempools totaling size 642.649841 MiB 00:05:14.972 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:14.972 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:14.972 size: 92.545471 MiB name: bdev_io_1850937 00:05:14.972 size: 51.011292 MiB name: evtpool_1850937 00:05:14.972 size: 50.003479 MiB name: msgpool_1850937 00:05:14.972 size: 36.509338 MiB name: fsdev_io_1850937 00:05:14.972 size: 21.763794 MiB name: PDU_Pool 00:05:14.972 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:14.972 size: 0.026123 MiB name: Session_Pool 00:05:14.972 end mempools------- 00:05:14.972 6 memzones totaling size 4.142822 MiB 00:05:14.972 size: 1.000366 MiB name: RG_ring_0_1850937 00:05:14.972 size: 1.000366 MiB name: RG_ring_1_1850937 00:05:14.972 size: 1.000366 MiB name: RG_ring_4_1850937 00:05:14.972 size: 1.000366 MiB name: RG_ring_5_1850937 00:05:14.972 size: 0.125366 MiB name: RG_ring_2_1850937 00:05:14.972 size: 0.015991 MiB name: RG_ring_3_1850937 00:05:14.972 end memzones------- 00:05:14.972 11:00:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:14.972 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:05:14.972 list of free elements. 
size: 13.984680 MiB 00:05:14.972 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:14.972 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:14.972 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:14.972 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:14.972 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:14.972 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:14.972 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:14.972 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:14.972 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:14.972 element at address: 0x20001d800000 with size: 0.582886 MiB 00:05:14.972 element at address: 0x200003e00000 with size: 0.495422 MiB 00:05:14.972 element at address: 0x20000d800000 with size: 0.490723 MiB 00:05:14.972 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:14.972 element at address: 0x200007000000 with size: 0.481934 MiB 00:05:14.972 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:05:14.972 element at address: 0x200003a00000 with size: 0.355042 MiB 00:05:14.972 list of standard malloc elements. size: 199.218628 MiB 00:05:14.972 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:14.972 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:14.972 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:14.972 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:14.972 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:14.972 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:14.972 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:14.972 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:14.972 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:14.972 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:14.972 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:14.972 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:14.972 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:14.972 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:14.972 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:14.972 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:14.972 element at address: 0x200003a5ae40 with size: 0.000183 MiB 00:05:14.972 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:14.972 element at address: 0x200003a5f300 with size: 0.000183 MiB 00:05:14.972 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:14.972 element at address: 0x200003a7f680 with size: 0.000183 MiB 00:05:14.972 element at address: 0x200003aff940 with size: 0.000183 MiB 00:05:14.972 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:14.972 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:14.972 element at address: 0x200003eff000 with size: 0.000183 MiB 00:05:14.972 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:14.972 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:14.972 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:14.972 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:14.972 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:14.972 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:14.972 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:05:14.972 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:14.972 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:14.972 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:14.972 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:14.972 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:14.972 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:14.972 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:14.972 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:05:14.972 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:05:14.972 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:05:14.972 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:14.972 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:14.972 list of memzone associated elements. size: 646.796692 MiB 00:05:14.972 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:14.972 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:14.972 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:14.972 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:14.972 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:14.972 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1850937_0 00:05:14.972 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:14.972 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1850937_0 00:05:14.972 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:14.972 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1850937_0 00:05:14.972 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:14.972 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1850937_0 00:05:14.972 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:14.972 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:14.972 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:14.972 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:14.972 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:14.972 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1850937 00:05:14.972 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:14.972 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1850937 00:05:14.972 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:14.972 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1850937 00:05:14.972 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:14.972 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:14.972 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:14.972 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:14.972 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:14.972 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:14.972 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:14.972 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:14.972 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:14.972 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1850937 00:05:14.972 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:14.972 associated memzone info: 
size: 1.000366 MiB name: RG_ring_1_1850937 00:05:14.972 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:14.972 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1850937 00:05:14.972 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:14.972 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1850937 00:05:14.972 element at address: 0x200003a7f740 with size: 0.500488 MiB 00:05:14.972 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1850937 00:05:14.972 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:05:14.972 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1850937 00:05:14.972 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:14.972 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:14.972 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:14.972 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:14.972 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:14.972 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:14.972 element at address: 0x200003a5f3c0 with size: 0.125488 MiB 00:05:14.972 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1850937 00:05:14.972 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:14.972 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:14.972 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:05:14.972 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:14.972 element at address: 0x200003a5b100 with size: 0.016113 MiB 00:05:14.972 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1850937 00:05:14.972 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:05:14.972 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:14.972 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:14.972 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1850937 00:05:14.973 element at address: 0x200003affa00 with size: 0.000305 MiB 00:05:14.973 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1850937 00:05:14.973 element at address: 0x200003a5af00 with size: 0.000305 MiB 00:05:14.973 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1850937 00:05:14.973 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:05:14.973 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:14.973 11:00:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:14.973 11:00:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1850937 00:05:14.973 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1850937 ']' 00:05:14.973 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1850937 00:05:14.973 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:14.973 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.973 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1850937 00:05:14.973 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:14.973 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.973 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1850937' 
00:05:14.973 killing process with pid 1850937 00:05:14.973 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1850937 00:05:14.973 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1850937 00:05:15.541 00:05:15.541 real 0m0.993s 00:05:15.541 user 0m0.924s 00:05:15.541 sys 0m0.405s 00:05:15.541 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.541 11:00:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.541 ************************************ 00:05:15.541 END TEST dpdk_mem_utility 00:05:15.541 ************************************ 00:05:15.541 11:00:12 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:15.541 11:00:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.541 11:00:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.541 11:00:12 -- common/autotest_common.sh@10 -- # set +x 00:05:15.541 ************************************ 00:05:15.541 START TEST event 00:05:15.541 ************************************ 00:05:15.541 11:00:12 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:15.541 * Looking for test storage... 00:05:15.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:15.541 11:00:12 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:15.541 11:00:12 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:15.541 11:00:12 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:15.542 11:00:13 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:15.542 11:00:13 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.542 11:00:13 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.542 11:00:13 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.542 11:00:13 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.542 11:00:13 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.542 11:00:13 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.542 11:00:13 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.542 11:00:13 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.542 11:00:13 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.542 11:00:13 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.542 11:00:13 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.542 11:00:13 event -- scripts/common.sh@344 -- # case "$op" in 00:05:15.542 11:00:13 event -- scripts/common.sh@345 -- # : 1 00:05:15.542 11:00:13 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.542 11:00:13 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.542 11:00:13 event -- scripts/common.sh@365 -- # decimal 1 00:05:15.542 11:00:13 event -- scripts/common.sh@353 -- # local d=1 00:05:15.542 11:00:13 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.542 11:00:13 event -- scripts/common.sh@355 -- # echo 1 00:05:15.542 11:00:13 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.542 11:00:13 event -- scripts/common.sh@366 -- # decimal 2 00:05:15.542 11:00:13 event -- scripts/common.sh@353 -- # local d=2 00:05:15.542 11:00:13 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.542 11:00:13 event -- scripts/common.sh@355 -- # echo 2 00:05:15.542 11:00:13 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.542 11:00:13 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.542 11:00:13 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.542 11:00:13 event -- scripts/common.sh@368 -- # return 0 00:05:15.542 11:00:13 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.542 11:00:13 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:15.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.542 --rc genhtml_branch_coverage=1 00:05:15.542 --rc genhtml_function_coverage=1 00:05:15.542 --rc genhtml_legend=1 00:05:15.542 --rc geninfo_all_blocks=1 00:05:15.542 --rc geninfo_unexecuted_blocks=1 00:05:15.542 00:05:15.542 ' 00:05:15.542 11:00:13 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:15.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.542 --rc genhtml_branch_coverage=1 00:05:15.542 --rc genhtml_function_coverage=1 00:05:15.542 --rc genhtml_legend=1 00:05:15.542 --rc geninfo_all_blocks=1 00:05:15.542 --rc geninfo_unexecuted_blocks=1 00:05:15.542 00:05:15.542 ' 00:05:15.542 11:00:13 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:15.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.542 --rc genhtml_branch_coverage=1 00:05:15.542 --rc genhtml_function_coverage=1 00:05:15.542 --rc genhtml_legend=1 00:05:15.542 --rc geninfo_all_blocks=1 00:05:15.542 --rc geninfo_unexecuted_blocks=1 00:05:15.542 00:05:15.542 ' 00:05:15.542 11:00:13 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:15.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.542 --rc genhtml_branch_coverage=1 00:05:15.542 --rc genhtml_function_coverage=1 00:05:15.542 --rc genhtml_legend=1 00:05:15.542 --rc geninfo_all_blocks=1 00:05:15.542 --rc geninfo_unexecuted_blocks=1 00:05:15.542 00:05:15.542 ' 00:05:15.542 11:00:13 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:15.542 11:00:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:15.542 11:00:13 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:15.542 11:00:13 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:15.542 11:00:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.542 11:00:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.542 ************************************ 00:05:15.542 START TEST event_perf 00:05:15.542 ************************************ 00:05:15.542 11:00:13 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:15.542 Running I/O for 1 seconds...[2024-10-06 11:00:13.094794] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:05:15.542 [2024-10-06 11:00:13.094863] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851225 ] 00:05:15.802 [2024-10-06 11:00:13.154044] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:15.802 [2024-10-06 11:00:13.195730] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.802 [2024-10-06 11:00:13.195832] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.802 [2024-10-06 11:00:13.195896] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.802 [2024-10-06 11:00:13.195898] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.739 Running I/O for 1 seconds... 00:05:16.739 lcore 0: 208271 00:05:16.739 lcore 1: 208271 00:05:16.739 lcore 2: 208271 00:05:16.739 lcore 3: 208271 00:05:16.739 done. 00:05:16.739 00:05:16.739 real 0m1.178s 00:05:16.739 user 0m4.091s 00:05:16.739 sys 0m0.085s 00:05:16.739 11:00:14 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.739 11:00:14 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.739 ************************************ 00:05:16.739 END TEST event_perf 00:05:16.739 ************************************ 00:05:16.739 11:00:14 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:16.739 11:00:14 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:16.739 11:00:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.739 11:00:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.998 ************************************ 00:05:16.998 START TEST event_reactor 00:05:16.998 ************************************ 00:05:16.998 11:00:14 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:16.998 [2024-10-06 11:00:14.327475] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:05:16.998 [2024-10-06 11:00:14.327527] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851469 ] 00:05:16.998 [2024-10-06 11:00:14.380839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.998 [2024-10-06 11:00:14.418528] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.935 test_start 00:05:17.935 oneshot 00:05:17.935 tick 100 00:05:17.935 tick 100 00:05:17.935 tick 250 00:05:17.935 tick 100 00:05:17.935 tick 100 00:05:17.935 tick 100 00:05:17.935 tick 250 00:05:17.935 tick 500 00:05:17.935 tick 100 00:05:17.935 tick 100 00:05:17.935 tick 250 00:05:17.935 tick 100 00:05:17.935 tick 100 00:05:17.935 test_end 00:05:17.935 00:05:17.935 real 0m1.159s 00:05:17.935 user 0m1.086s 00:05:17.936 sys 0m0.070s 00:05:17.936 11:00:15 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.936 11:00:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:17.936 ************************************ 00:05:17.936 END TEST event_reactor 00:05:17.936 ************************************ 00:05:17.936 11:00:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:17.936 11:00:15 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:17.936 11:00:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.936 11:00:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.195 ************************************ 00:05:18.195 START TEST event_reactor_perf 00:05:18.195 ************************************ 00:05:18.195 11:00:15 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:18.195 [2024-10-06 11:00:15.558320] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:05:18.195 [2024-10-06 11:00:15.558391] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851709 ] 00:05:18.195 [2024-10-06 11:00:15.617616] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.195 [2024-10-06 11:00:15.655371] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.595 test_start 00:05:19.595 test_end 00:05:19.595 Performance: 523327 events per second 00:05:19.595 00:05:19.595 real 0m1.173s 00:05:19.595 user 0m1.094s 00:05:19.595 sys 0m0.075s 00:05:19.595 11:00:16 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.595 11:00:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:19.595 ************************************ 00:05:19.595 END TEST event_reactor_perf 00:05:19.595 ************************************ 00:05:19.595 11:00:16 event -- event/event.sh@49 -- # uname -s 00:05:19.595 11:00:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:19.595 11:00:16 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:19.595 11:00:16 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.595 11:00:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.595 11:00:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.595 ************************************ 00:05:19.595 START TEST event_scheduler 00:05:19.595 ************************************ 00:05:19.595 11:00:16 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:19.595 * Looking for test storage... 
00:05:19.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:19.595 11:00:16 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:19.595 11:00:16 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:19.595 11:00:16 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:19.595 11:00:16 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.595 11:00:16 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:19.595 11:00:16 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.596 11:00:16 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:19.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.596 --rc genhtml_branch_coverage=1 00:05:19.596 --rc genhtml_function_coverage=1 00:05:19.596 --rc genhtml_legend=1 00:05:19.596 --rc geninfo_all_blocks=1 00:05:19.596 --rc geninfo_unexecuted_blocks=1 00:05:19.596 00:05:19.596 ' 00:05:19.596 11:00:16 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:19.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.596 --rc genhtml_branch_coverage=1 00:05:19.596 --rc genhtml_function_coverage=1 00:05:19.596 --rc genhtml_legend=1 00:05:19.596 --rc geninfo_all_blocks=1 00:05:19.596 --rc geninfo_unexecuted_blocks=1 00:05:19.596 00:05:19.596 ' 00:05:19.596 11:00:16 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:19.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.596 --rc genhtml_branch_coverage=1 00:05:19.596 --rc genhtml_function_coverage=1 00:05:19.596 --rc genhtml_legend=1 00:05:19.596 --rc geninfo_all_blocks=1 00:05:19.596 --rc geninfo_unexecuted_blocks=1 00:05:19.596 00:05:19.596 ' 00:05:19.596 11:00:16 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:19.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.596 --rc genhtml_branch_coverage=1 00:05:19.596 --rc genhtml_function_coverage=1 00:05:19.596 --rc genhtml_legend=1 00:05:19.596 --rc geninfo_all_blocks=1 00:05:19.596 --rc geninfo_unexecuted_blocks=1 00:05:19.596 00:05:19.596 ' 00:05:19.596 11:00:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:19.596 11:00:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1851988 00:05:19.596 11:00:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:19.596 11:00:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.596 11:00:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1851988 00:05:19.596 11:00:16 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1851988 ']' 00:05:19.596 11:00:16 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.596 11:00:16 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.596 11:00:16 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.596 11:00:16 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.596 11:00:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.596 [2024-10-06 11:00:16.996746] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:05:19.596 [2024-10-06 11:00:16.996794] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851988 ] 00:05:19.596 [2024-10-06 11:00:17.046455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:19.596 [2024-10-06 11:00:17.087244] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.596 [2024-10-06 11:00:17.087336] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.596 [2024-10-06 11:00:17.087404] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.596 [2024-10-06 11:00:17.087405] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.596 11:00:17 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.596 11:00:17 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:19.596 11:00:17 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:19.596 11:00:17 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.596 11:00:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.892 [2024-10-06 11:00:17.151987] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:19.892 [2024-10-06 11:00:17.152005] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:19.892 [2024-10-06 11:00:17.152014] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:19.892 [2024-10-06 11:00:17.152021] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:19.892 [2024-10-06 11:00:17.152026] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:19.892 11:00:17 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.892 11:00:17 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:19.892 11:00:17 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.892 11:00:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.892 [2024-10-06 11:00:17.219713] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:19.892 11:00:17 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.892 11:00:17 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:19.892 11:00:17 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.892 11:00:17 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.892 11:00:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.892 ************************************ 00:05:19.892 START TEST scheduler_create_thread 00:05:19.892 ************************************ 00:05:19.892 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:19.892 11:00:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:19.892 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.892 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.892 2 00:05:19.892 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.892 11:00:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:19.892 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.893 3 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.893 4 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.893 5 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.893 6 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.893 7 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.893 8 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.893 9 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.893 10 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.893 11:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.902 11:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.902 11:00:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:20.902 11:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.902 11:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.280 11:00:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.280 11:00:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:22.280 11:00:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:22.280 11:00:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.280 11:00:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.219 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.219 00:05:23.219 real 0m3.382s 00:05:23.219 user 0m0.021s 00:05:23.219 sys 0m0.009s 00:05:23.219 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.219 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.219 ************************************ 00:05:23.219 END TEST scheduler_create_thread 00:05:23.219 ************************************ 00:05:23.219 11:00:20 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:23.219 11:00:20 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1851988 00:05:23.219 11:00:20 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1851988 ']' 00:05:23.219 11:00:20 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1851988 00:05:23.219 11:00:20 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:23.219 11:00:20 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:23.219 11:00:20 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1851988 00:05:23.219 11:00:20 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:23.219 11:00:20 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:23.219 11:00:20 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1851988' 00:05:23.219 killing process with pid 1851988 00:05:23.219 11:00:20 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1851988 00:05:23.219 11:00:20 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1851988 00:05:23.478 [2024-10-06 11:00:21.015877] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
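The thread create/set-active/delete churn above goes through the scheduler test plugin rather than core RPCs; a minimal sketch of driving it manually, assuming the test/event/scheduler app is running with --wait-for-rpc, that scripts/rpc.py can locate scheduler_plugin on its Python path (the harness arranges this through rpc_cmd), and that the -n/-m/-a arguments keep the meaning seen in the trace (thread name, cpumask, busy percentage):
# create a thread pinned to core 0 that reports itself 100% busy
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
# lower a created thread to 50% busy and then delete it, using the id printed by
# its create call (shown here as 11 purely for illustration)
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 11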
00:05:23.738 00:05:23.738 real 0m4.459s 00:05:23.738 user 0m7.862s 00:05:23.738 sys 0m0.333s 00:05:23.738 11:00:21 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.738 11:00:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.738 ************************************ 00:05:23.738 END TEST event_scheduler 00:05:23.738 ************************************ 00:05:23.738 11:00:21 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:23.738 11:00:21 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:23.738 11:00:21 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.738 11:00:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.738 11:00:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.998 ************************************ 00:05:23.998 START TEST app_repeat 00:05:23.998 ************************************ 00:05:23.998 11:00:21 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:23.998 11:00:21 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.998 11:00:21 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.998 11:00:21 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:23.998 11:00:21 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.998 11:00:21 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:23.998 11:00:21 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:23.998 11:00:21 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:23.998 11:00:21 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1852725 00:05:23.998 11:00:21 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.998 11:00:21 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:23.998 11:00:21 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1852725' 00:05:23.998 Process app_repeat pid: 1852725 00:05:23.998 11:00:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:23.998 11:00:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:23.998 spdk_app_start Round 0 00:05:23.998 11:00:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1852725 /var/tmp/spdk-nbd.sock 00:05:23.998 11:00:21 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1852725 ']' 00:05:23.998 11:00:21 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.998 11:00:21 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.998 11:00:21 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:23.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:23.998 11:00:21 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.998 11:00:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.998 [2024-10-06 11:00:21.353160] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:05:23.998 [2024-10-06 11:00:21.353213] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1852725 ] 00:05:23.998 [2024-10-06 11:00:21.411880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.998 [2024-10-06 11:00:21.451118] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.998 [2024-10-06 11:00:21.451119] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.998 11:00:21 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.998 11:00:21 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:23.998 11:00:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.257 Malloc0 00:05:24.257 11:00:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.517 Malloc1 00:05:24.517 11:00:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.517 11:00:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.517 11:00:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.517 11:00:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:24.517 11:00:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.517 11:00:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:24.517 11:00:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.517 11:00:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.517 11:00:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.517 11:00:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:24.517 11:00:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.517 11:00:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:24.517 11:00:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:24.517 11:00:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:24.517 11:00:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.517 11:00:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:24.776 /dev/nbd0 00:05:24.776 11:00:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:24.776 11:00:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:24.776 11:00:22 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:24.776 11:00:22 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:24.776 11:00:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:24.776 11:00:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:24.776 11:00:22 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:05:24.776 11:00:22 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:24.776 11:00:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:24.776 11:00:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:24.776 11:00:22 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.776 1+0 records in 00:05:24.776 1+0 records out 00:05:24.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324579 s, 12.6 MB/s 00:05:24.776 11:00:22 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.776 11:00:22 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:24.776 11:00:22 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.776 11:00:22 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:24.776 11:00:22 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:24.776 11:00:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.776 11:00:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.776 11:00:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:25.035 /dev/nbd1 00:05:25.035 11:00:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.035 11:00:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.035 11:00:22 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:25.035 11:00:22 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:25.035 11:00:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:25.035 11:00:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:25.035 11:00:22 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:25.035 11:00:22 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:25.035 11:00:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:25.035 11:00:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:25.035 11:00:22 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.035 1+0 records in 00:05:25.035 1+0 records out 00:05:25.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187762 s, 21.8 MB/s 00:05:25.035 11:00:22 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.035 11:00:22 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:25.035 11:00:22 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.035 11:00:22 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:25.035 11:00:22 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:25.035 11:00:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.035 11:00:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.035 
11:00:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.035 11:00:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.035 11:00:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.035 11:00:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:25.035 { 00:05:25.035 "nbd_device": "/dev/nbd0", 00:05:25.035 "bdev_name": "Malloc0" 00:05:25.035 }, 00:05:25.035 { 00:05:25.035 "nbd_device": "/dev/nbd1", 00:05:25.035 "bdev_name": "Malloc1" 00:05:25.035 } 00:05:25.035 ]' 00:05:25.035 11:00:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.035 { 00:05:25.035 "nbd_device": "/dev/nbd0", 00:05:25.035 "bdev_name": "Malloc0" 00:05:25.035 }, 00:05:25.035 { 00:05:25.035 "nbd_device": "/dev/nbd1", 00:05:25.035 "bdev_name": "Malloc1" 00:05:25.035 } 00:05:25.035 ]' 00:05:25.035 11:00:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:25.295 /dev/nbd1' 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:25.295 /dev/nbd1' 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:25.295 256+0 records in 00:05:25.295 256+0 records out 00:05:25.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101632 s, 103 MB/s 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:25.295 256+0 records in 00:05:25.295 256+0 records out 00:05:25.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135723 s, 77.3 MB/s 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:25.295 256+0 records in 00:05:25.295 256+0 records out 00:05:25.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138441 s, 75.7 MB/s 00:05:25.295 11:00:22 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.295 11:00:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.552 11:00:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.552 11:00:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.552 11:00:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.552 11:00:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.552 11:00:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.552 11:00:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.552 11:00:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.552 11:00:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.552 11:00:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.552 11:00:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.552 11:00:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.552 11:00:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.552 11:00:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.552 11:00:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.552 11:00:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:25.552 11:00:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.552 11:00:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.872 11:00:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.872 11:00:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.872 11:00:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.872 11:00:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.872 11:00:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.872 11:00:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.872 11:00:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.872 11:00:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.872 11:00:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.872 11:00:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.872 11:00:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:25.872 11:00:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.872 11:00:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.872 11:00:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.872 11:00:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.872 11:00:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.872 11:00:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.131 11:00:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:26.390 [2024-10-06 11:00:23.728220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.390 [2024-10-06 11:00:23.763776] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.390 [2024-10-06 11:00:23.763779] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.390 [2024-10-06 11:00:23.803798] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.390 [2024-10-06 11:00:23.803838] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:29.680 11:00:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:29.680 11:00:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:29.680 spdk_app_start Round 1 00:05:29.680 11:00:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1852725 /var/tmp/spdk-nbd.sock 00:05:29.680 11:00:26 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1852725 ']' 00:05:29.680 11:00:26 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.680 11:00:26 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.680 11:00:26 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:29.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
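For readers following the trace: the waitfornbd helper that appears before every dd above (autotest_common.sh lines 868-889 in this workspace) polls /proc/partitions for the device name and then performs a single O_DIRECT read to confirm the device actually answers. A condensed sketch of that logic, reconstructed from the traced commands, follows; only the success path is exercised in this run, so the retry delay and the final give-up are assumptions, and the temporary-file path is shortened for readability.

# Sketch of waitfornbd as reconstructed from the trace (illustrative, not the verbatim helper)
waitfornbd() {
        local nbd_name=$1 i size
        # 1) wait (up to 20 tries) for the device to show up in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
                grep -q -w "$nbd_name" /proc/partitions && break
                sleep 0.1        # retry delay: assumption, not visible in this successful run
        done
        # 2) read one 4 KiB block with O_DIRECT and check that something was actually read
        for ((i = 1; i <= 20; i++)); do
                if dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                        size=$(stat -c %s /tmp/nbdtest)
                        rm -f /tmp/nbdtest
                        [ "$size" != 0 ] && return 0
                fi
                sleep 0.1        # assumption, as above
        done
        return 1                 # assumption: give up once the retries run out
}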
00:05:29.680 11:00:26 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.680 11:00:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.680 11:00:26 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.680 11:00:26 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:29.680 11:00:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.680 Malloc0 00:05:29.680 11:00:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.680 Malloc1 00:05:29.680 11:00:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.680 11:00:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.680 11:00:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.680 11:00:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.680 11:00:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.680 11:00:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.680 11:00:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.680 11:00:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.680 11:00:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.680 11:00:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.680 11:00:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.680 11:00:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.680 11:00:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:29.680 11:00:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.680 11:00:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.680 11:00:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.939 /dev/nbd0 00:05:29.939 11:00:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.939 11:00:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.939 11:00:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:29.939 11:00:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:29.939 11:00:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:29.939 11:00:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:29.939 11:00:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:29.939 11:00:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:29.939 11:00:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:29.939 11:00:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:29.939 11:00:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:29.939 1+0 records in 00:05:29.939 1+0 records out 00:05:29.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184657 s, 22.2 MB/s 00:05:29.939 11:00:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.940 11:00:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:29.940 11:00:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.940 11:00:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:29.940 11:00:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:29.940 11:00:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.940 11:00:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.940 11:00:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.199 /dev/nbd1 00:05:30.199 11:00:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.199 11:00:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.199 11:00:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:30.199 11:00:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:30.199 11:00:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:30.199 11:00:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:30.199 11:00:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:30.199 11:00:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:30.199 11:00:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:30.199 11:00:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:30.199 11:00:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.199 1+0 records in 00:05:30.199 1+0 records out 00:05:30.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191812 s, 21.4 MB/s 00:05:30.199 11:00:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.199 11:00:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:30.199 11:00:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.199 11:00:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:30.199 11:00:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:30.199 11:00:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.199 11:00:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.199 11:00:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.199 11:00:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.199 11:00:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.459 11:00:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:30.459 { 00:05:30.459 "nbd_device": "/dev/nbd0", 00:05:30.459 "bdev_name": "Malloc0" 00:05:30.459 }, 00:05:30.459 { 00:05:30.459 "nbd_device": "/dev/nbd1", 00:05:30.459 "bdev_name": "Malloc1" 00:05:30.459 } 00:05:30.459 ]' 00:05:30.459 11:00:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.459 { 00:05:30.459 "nbd_device": "/dev/nbd0", 00:05:30.459 "bdev_name": "Malloc0" 00:05:30.459 }, 00:05:30.459 { 00:05:30.459 "nbd_device": "/dev/nbd1", 00:05:30.459 "bdev_name": "Malloc1" 00:05:30.459 } 00:05:30.459 ]' 00:05:30.459 11:00:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.459 11:00:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.459 /dev/nbd1' 00:05:30.459 11:00:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.459 /dev/nbd1' 00:05:30.459 11:00:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.459 11:00:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.459 11:00:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.459 11:00:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.459 11:00:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.459 11:00:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.459 11:00:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.459 11:00:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.459 11:00:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.459 11:00:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.459 11:00:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.459 11:00:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.459 256+0 records in 00:05:30.459 256+0 records out 00:05:30.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101612 s, 103 MB/s 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.460 256+0 records in 00:05:30.460 256+0 records out 00:05:30.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130784 s, 80.2 MB/s 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.460 256+0 records in 00:05:30.460 256+0 records out 00:05:30.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146457 s, 71.6 MB/s 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.460 11:00:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.719 11:00:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.719 11:00:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.719 11:00:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.719 11:00:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.719 11:00:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.719 11:00:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.719 11:00:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.719 11:00:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.719 11:00:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.719 11:00:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:30.978 11:00:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:30.978 11:00:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:30.978 11:00:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:30.978 11:00:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.978 11:00:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.978 11:00:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:30.978 11:00:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.978 11:00:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.978 11:00:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.978 11:00:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.978 11:00:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.237 11:00:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.237 11:00:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.237 11:00:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.237 11:00:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.237 11:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.237 11:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.237 11:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.238 11:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.238 11:00:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.238 11:00:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.238 11:00:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.238 11:00:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.238 11:00:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.238 11:00:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.496 [2024-10-06 11:00:28.977123] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.496 [2024-10-06 11:00:29.012622] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.497 [2024-10-06 11:00:29.012625] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.497 [2024-10-06 11:00:29.053489] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.497 [2024-10-06 11:00:29.053531] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:34.784 11:00:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.784 11:00:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:34.784 spdk_app_start Round 2 00:05:34.784 11:00:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1852725 /var/tmp/spdk-nbd.sock 00:05:34.784 11:00:31 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1852725 ']' 00:05:34.784 11:00:31 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.784 11:00:31 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.784 11:00:31 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
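Each round of app_repeat then runs the same create/write/verify/teardown sequence against the freshly started app. Distilled from the commands in the trace (block sizes, counts, device names and the RPC socket are exactly as logged; the $RPC alias is only for readability):

# Per-round flow as traced, condensed for illustration
RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$RPC bdev_malloc_create 64 4096                      # -> Malloc0
$RPC bdev_malloc_create 64 4096                      # -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1
$RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd    # expect 2

dd if=/dev/urandom of=nbdrandtest bs=4096 count=256                  # 1 MiB of random data
for dev in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct   # write through the nbd device
        cmp -b -n 1M nbdrandtest "$dev"                              # read back and compare
done
rm nbdrandtest

$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
$RPC nbd_get_disks                                   # now returns [], i.e. a count of 0
$RPC spdk_kill_instance SIGTERM                      # ends the round; the app cycles to the next one

The '[' 2 -ne 2 ']' and '[' 0 -ne 0 ']' checks visible in nbd_common.sh around those counts are what would fail the test if a device were missing after start or lingered after nbd_stop_disk.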
00:05:34.784 11:00:31 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.785 11:00:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.785 11:00:31 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.785 11:00:31 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:34.785 11:00:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.785 Malloc0 00:05:34.785 11:00:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.043 Malloc1 00:05:35.043 11:00:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.043 11:00:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.043 11:00:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.043 11:00:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.043 11:00:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.043 11:00:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.043 11:00:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.043 11:00:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.043 11:00:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.043 11:00:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.043 11:00:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.043 11:00:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.043 11:00:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.043 11:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.043 11:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.043 11:00:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.043 /dev/nbd0 00:05:35.302 11:00:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.302 11:00:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:35.302 1+0 records in 00:05:35.302 1+0 records out 00:05:35.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259073 s, 15.8 MB/s 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:35.302 11:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.302 11:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.302 11:00:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.302 /dev/nbd1 00:05:35.302 11:00:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.302 11:00:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:35.302 11:00:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.561 1+0 records in 00:05:35.561 1+0 records out 00:05:35.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201546 s, 20.3 MB/s 00:05:35.561 11:00:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.561 11:00:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:35.561 11:00:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.561 11:00:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:35.561 11:00:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:35.561 11:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.561 11:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.561 11:00:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.561 11:00:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.561 11:00:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.561 11:00:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:35.561 { 00:05:35.561 "nbd_device": "/dev/nbd0", 00:05:35.561 "bdev_name": "Malloc0" 00:05:35.561 }, 00:05:35.561 { 00:05:35.561 "nbd_device": "/dev/nbd1", 00:05:35.561 "bdev_name": "Malloc1" 00:05:35.561 } 00:05:35.561 ]' 00:05:35.561 11:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:35.561 { 00:05:35.561 "nbd_device": "/dev/nbd0", 00:05:35.561 "bdev_name": "Malloc0" 00:05:35.561 }, 00:05:35.561 { 00:05:35.561 "nbd_device": "/dev/nbd1", 00:05:35.561 "bdev_name": "Malloc1" 00:05:35.561 } 00:05:35.561 ]' 00:05:35.561 11:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.561 11:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:35.561 /dev/nbd1' 00:05:35.561 11:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:35.561 /dev/nbd1' 00:05:35.561 11:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.561 11:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:35.561 11:00:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:35.561 11:00:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:35.561 11:00:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:35.561 11:00:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:35.561 11:00:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.561 11:00:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.561 11:00:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:35.561 11:00:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.561 11:00:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:35.561 11:00:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:35.820 256+0 records in 00:05:35.820 256+0 records out 00:05:35.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104178 s, 101 MB/s 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:35.820 256+0 records in 00:05:35.820 256+0 records out 00:05:35.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141232 s, 74.2 MB/s 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:35.820 256+0 records in 00:05:35.820 256+0 records out 00:05:35.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156301 s, 67.1 MB/s 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.820 11:00:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.079 11:00:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.079 11:00:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.079 11:00:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.079 11:00:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.079 11:00:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.079 11:00:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.080 11:00:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.080 11:00:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.080 11:00:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.080 11:00:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.080 11:00:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.080 11:00:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.080 11:00:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.080 11:00:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.080 11:00:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.080 11:00:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.080 11:00:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.080 11:00:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.080 11:00:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.080 11:00:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.080 11:00:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.339 11:00:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.339 11:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.339 11:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.339 11:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.339 11:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.339 11:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.339 11:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:36.339 11:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.339 11:00:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.339 11:00:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.339 11:00:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.339 11:00:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.339 11:00:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:36.598 11:00:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:36.856 [2024-10-06 11:00:34.235847] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.856 [2024-10-06 11:00:34.271245] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.856 [2024-10-06 11:00:34.271248] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.856 [2024-10-06 11:00:34.311418] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:36.856 [2024-10-06 11:00:34.311460] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.145 11:00:37 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1852725 /var/tmp/spdk-nbd.sock 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1852725 ']' 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
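Putting the rounds together, the driver in test/event/event.sh that produced this trace is, schematically (option values and socket path copied from the log; loop structure simplified for illustration):

# Schematic outer loop of app_repeat_test, simplified from the traced event.sh lines
test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
repeat_pid=$!
trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # wait for this round's RPC socket
        # ... per-round malloc/nbd write-and-verify sequence shown earlier ...
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3                                              # let the app cycle into the next round
done

waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock           # Round 3 comes up one last time
killprocess "$repeat_pid"                                    # final SIGTERM; the app prints the round summary
trap - SIGINT SIGTERM EXIT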
00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:40.145 11:00:37 event.app_repeat -- event/event.sh@39 -- # killprocess 1852725 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1852725 ']' 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1852725 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1852725 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1852725' 00:05:40.145 killing process with pid 1852725 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1852725 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1852725 00:05:40.145 spdk_app_start is called in Round 0. 00:05:40.145 Shutdown signal received, stop current app iteration 00:05:40.145 Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 reinitialization... 00:05:40.145 spdk_app_start is called in Round 1. 00:05:40.145 Shutdown signal received, stop current app iteration 00:05:40.145 Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 reinitialization... 00:05:40.145 spdk_app_start is called in Round 2. 00:05:40.145 Shutdown signal received, stop current app iteration 00:05:40.145 Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 reinitialization... 00:05:40.145 spdk_app_start is called in Round 3. 
00:05:40.145 Shutdown signal received, stop current app iteration 00:05:40.145 11:00:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:40.145 11:00:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:40.145 00:05:40.145 real 0m16.138s 00:05:40.145 user 0m35.315s 00:05:40.145 sys 0m2.517s 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.145 11:00:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.145 ************************************ 00:05:40.145 END TEST app_repeat 00:05:40.145 ************************************ 00:05:40.145 11:00:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:40.145 11:00:37 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:40.145 11:00:37 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.145 11:00:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.145 11:00:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.145 ************************************ 00:05:40.145 START TEST cpu_locks 00:05:40.145 ************************************ 00:05:40.145 11:00:37 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:40.145 * Looking for test storage... 00:05:40.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:40.145 11:00:37 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:40.145 11:00:37 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:40.145 11:00:37 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:40.145 11:00:37 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.145 11:00:37 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:40.145 11:00:37 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.145 11:00:37 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:40.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.145 --rc genhtml_branch_coverage=1 00:05:40.145 --rc genhtml_function_coverage=1 00:05:40.145 --rc genhtml_legend=1 00:05:40.145 --rc geninfo_all_blocks=1 00:05:40.145 --rc geninfo_unexecuted_blocks=1 00:05:40.145 00:05:40.145 ' 00:05:40.145 11:00:37 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:40.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.145 --rc genhtml_branch_coverage=1 00:05:40.145 --rc genhtml_function_coverage=1 00:05:40.145 --rc genhtml_legend=1 00:05:40.145 --rc geninfo_all_blocks=1 00:05:40.145 --rc geninfo_unexecuted_blocks=1 00:05:40.145 00:05:40.145 ' 00:05:40.145 11:00:37 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:40.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.145 --rc genhtml_branch_coverage=1 00:05:40.145 --rc genhtml_function_coverage=1 00:05:40.145 --rc genhtml_legend=1 00:05:40.145 --rc geninfo_all_blocks=1 00:05:40.145 --rc geninfo_unexecuted_blocks=1 00:05:40.145 00:05:40.145 ' 00:05:40.146 11:00:37 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:40.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.146 --rc genhtml_branch_coverage=1 00:05:40.146 --rc genhtml_function_coverage=1 00:05:40.146 --rc genhtml_legend=1 00:05:40.146 --rc geninfo_all_blocks=1 00:05:40.146 --rc geninfo_unexecuted_blocks=1 00:05:40.146 00:05:40.146 ' 00:05:40.146 11:00:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:40.146 11:00:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:40.146 11:00:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:40.146 11:00:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:40.146 11:00:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.146 11:00:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.146 11:00:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.405 ************************************ 
00:05:40.406 START TEST default_locks 00:05:40.406 ************************************ 00:05:40.406 11:00:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:40.406 11:00:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1855690 00:05:40.406 11:00:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.406 11:00:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1855690 00:05:40.406 11:00:37 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1855690 ']' 00:05:40.406 11:00:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.406 11:00:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.406 11:00:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.406 11:00:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.406 11:00:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.406 [2024-10-06 11:00:37.791043] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:05:40.406 [2024-10-06 11:00:37.791101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1855690 ] 00:05:40.406 [2024-10-06 11:00:37.848777] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.406 [2024-10-06 11:00:37.888088] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.665 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.665 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:40.665 11:00:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1855690 00:05:40.665 11:00:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1855690 00:05:40.665 11:00:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.234 lslocks: write error 00:05:41.234 11:00:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1855690 00:05:41.234 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1855690 ']' 00:05:41.234 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1855690 00:05:41.234 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:41.234 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.234 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1855690 00:05:41.234 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.234 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.234 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 1855690' 00:05:41.234 killing process with pid 1855690 00:05:41.234 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1855690 00:05:41.234 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1855690 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1855690 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1855690 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1855690 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1855690 ']' 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1855690) - No such process 00:05:41.493 ERROR: process (pid: 1855690) is no longer running 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:41.493 00:05:41.493 real 0m1.212s 00:05:41.493 user 0m1.181s 00:05:41.493 sys 0m0.582s 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.493 11:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.493 ************************************ 00:05:41.493 END TEST default_locks 00:05:41.493 ************************************ 00:05:41.493 11:00:38 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:41.493 11:00:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.493 11:00:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.493 11:00:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.493 ************************************ 00:05:41.493 START TEST default_locks_via_rpc 00:05:41.493 ************************************ 00:05:41.493 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:41.493 11:00:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1855909 00:05:41.493 11:00:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.493 11:00:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1855909 00:05:41.493 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1855909 ']' 00:05:41.493 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.493 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.493 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:41.493 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.493 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.493 [2024-10-06 11:00:39.066736] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:05:41.493 [2024-10-06 11:00:39.066777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1855909 ] 00:05:41.752 [2024-10-06 11:00:39.124388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.752 [2024-10-06 11:00:39.163362] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1855909 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1855909 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1855909 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1855909 ']' 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1855909 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1855909 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.012 
11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1855909' 00:05:42.012 killing process with pid 1855909 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1855909 00:05:42.012 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1855909 00:05:42.581 00:05:42.581 real 0m0.844s 00:05:42.581 user 0m0.790s 00:05:42.581 sys 0m0.397s 00:05:42.581 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.581 11:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.581 ************************************ 00:05:42.581 END TEST default_locks_via_rpc 00:05:42.581 ************************************ 00:05:42.581 11:00:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:42.581 11:00:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.581 11:00:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.581 11:00:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.581 ************************************ 00:05:42.581 START TEST non_locking_app_on_locked_coremask 00:05:42.581 ************************************ 00:05:42.581 11:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:42.581 11:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1856144 00:05:42.581 11:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1856144 /var/tmp/spdk.sock 00:05:42.581 11:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1856144 ']' 00:05:42.581 11:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.581 11:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.581 11:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.581 11:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.581 11:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.581 11:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.581 [2024-10-06 11:00:39.974773] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:05:42.581 [2024-10-06 11:00:39.974816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856144 ] 00:05:42.581 [2024-10-06 11:00:40.031686] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.581 [2024-10-06 11:00:40.072783] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.841 11:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.841 11:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:42.841 11:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1856153 00:05:42.841 11:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1856153 /var/tmp/spdk2.sock 00:05:42.841 11:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1856153 ']' 00:05:42.841 11:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.841 11:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.841 11:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.841 11:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:42.841 11:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.841 11:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.841 [2024-10-06 11:00:40.319225] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:05:42.841 [2024-10-06 11:00:40.319273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856153 ] 00:05:42.841 [2024-10-06 11:00:40.395074] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:42.841 [2024-10-06 11:00:40.395105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.101 [2024-10-06 11:00:40.475500] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.669 11:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.669 11:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:43.669 11:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1856144 00:05:43.669 11:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1856144 00:05:43.669 11:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.928 lslocks: write error 00:05:43.928 11:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1856144 00:05:43.928 11:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1856144 ']' 00:05:43.928 11:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1856144 00:05:43.928 11:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:43.928 11:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.928 11:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1856144 00:05:44.187 11:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.187 11:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.187 11:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1856144' 00:05:44.187 killing process with pid 1856144 00:05:44.187 11:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1856144 00:05:44.187 11:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1856144 00:05:44.755 11:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1856153 00:05:44.755 11:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1856153 ']' 00:05:44.755 11:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1856153 00:05:44.755 11:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:44.755 11:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.755 11:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1856153 00:05:44.755 11:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.755 11:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.755 11:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1856153' 00:05:44.755 
killing process with pid 1856153 00:05:44.755 11:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1856153 00:05:44.755 11:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1856153 00:05:45.014 00:05:45.014 real 0m2.602s 00:05:45.014 user 0m2.722s 00:05:45.014 sys 0m0.830s 00:05:45.014 11:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.014 11:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.014 ************************************ 00:05:45.014 END TEST non_locking_app_on_locked_coremask 00:05:45.014 ************************************ 00:05:45.014 11:00:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:45.014 11:00:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.014 11:00:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.014 11:00:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.014 ************************************ 00:05:45.014 START TEST locking_app_on_unlocked_coremask 00:05:45.014 ************************************ 00:05:45.014 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:45.014 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1856627 00:05:45.014 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1856627 /var/tmp/spdk.sock 00:05:45.014 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1856627 ']' 00:05:45.014 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.014 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.014 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.014 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:45.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.014 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.274 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.274 [2024-10-06 11:00:42.640753] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:05:45.274 [2024-10-06 11:00:42.640797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856627 ] 00:05:45.274 [2024-10-06 11:00:42.694733] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:45.274 [2024-10-06 11:00:42.694760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.274 [2024-10-06 11:00:42.734651] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.533 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.533 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:45.533 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1856640 00:05:45.533 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:45.533 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1856640 /var/tmp/spdk2.sock 00:05:45.533 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1856640 ']' 00:05:45.533 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.533 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.533 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.533 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.533 11:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.533 [2024-10-06 11:00:42.953318] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:05:45.533 [2024-10-06 11:00:42.953365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856640 ] 00:05:45.533 [2024-10-06 11:00:43.021452] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.533 [2024-10-06 11:00:43.100313] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.471 11:00:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.471 11:00:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:46.471 11:00:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1856640 00:05:46.471 11:00:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1856640 00:05:46.471 11:00:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.039 lslocks: write error 00:05:47.039 11:00:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1856627 00:05:47.039 11:00:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1856627 ']' 00:05:47.039 11:00:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1856627 00:05:47.039 11:00:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:47.039 11:00:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.039 11:00:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1856627 00:05:47.039 11:00:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.039 11:00:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.039 11:00:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1856627' 00:05:47.039 killing process with pid 1856627 00:05:47.039 11:00:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1856627 00:05:47.039 11:00:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1856627 00:05:47.609 11:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1856640 00:05:47.609 11:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1856640 ']' 00:05:47.609 11:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1856640 00:05:47.609 11:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:47.609 11:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.609 11:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1856640 00:05:47.868 11:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.868 11:00:45 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.868 11:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1856640' 00:05:47.868 killing process with pid 1856640 00:05:47.868 11:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1856640 00:05:47.868 11:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1856640 00:05:48.126 00:05:48.126 real 0m2.943s 00:05:48.126 user 0m3.070s 00:05:48.126 sys 0m0.995s 00:05:48.126 11:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.126 11:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.126 ************************************ 00:05:48.126 END TEST locking_app_on_unlocked_coremask 00:05:48.126 ************************************ 00:05:48.126 11:00:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:48.126 11:00:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.126 11:00:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.126 11:00:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.126 ************************************ 00:05:48.126 START TEST locking_app_on_locked_coremask 00:05:48.126 ************************************ 00:05:48.126 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:48.126 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1857116 00:05:48.126 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1857116 /var/tmp/spdk.sock 00:05:48.126 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.126 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1857116 ']' 00:05:48.126 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.126 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.126 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.126 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.126 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.126 [2024-10-06 11:00:45.652131] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:05:48.126 [2024-10-06 11:00:45.652174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857116 ] 00:05:48.385 [2024-10-06 11:00:45.708044] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.385 [2024-10-06 11:00:45.744023] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.385 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.385 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:48.385 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1857245 00:05:48.385 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1857245 /var/tmp/spdk2.sock 00:05:48.385 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:48.385 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:48.385 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1857245 /var/tmp/spdk2.sock 00:05:48.385 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:48.385 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.386 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:48.386 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.386 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1857245 /var/tmp/spdk2.sock 00:05:48.386 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1857245 ']' 00:05:48.386 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.386 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.386 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.386 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.386 11:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.645 [2024-10-06 11:00:45.988039] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:05:48.645 [2024-10-06 11:00:45.988093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857245 ] 00:05:48.645 [2024-10-06 11:00:46.063805] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1857116 has claimed it. 00:05:48.645 [2024-10-06 11:00:46.063846] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:49.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1857245) - No such process 00:05:49.213 ERROR: process (pid: 1857245) is no longer running 00:05:49.213 11:00:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.213 11:00:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:49.213 11:00:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:49.213 11:00:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:49.213 11:00:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:49.213 11:00:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:49.213 11:00:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1857116 00:05:49.213 11:00:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1857116 00:05:49.213 11:00:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.781 lslocks: write error 00:05:49.781 11:00:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1857116 00:05:49.781 11:00:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1857116 ']' 00:05:49.781 11:00:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1857116 00:05:49.781 11:00:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:49.781 11:00:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.781 11:00:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1857116 00:05:49.781 11:00:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.781 11:00:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.781 11:00:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1857116' 00:05:49.781 killing process with pid 1857116 00:05:49.781 11:00:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1857116 00:05:49.781 11:00:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1857116 00:05:50.040 00:05:50.040 real 0m1.869s 00:05:50.040 user 0m2.005s 00:05:50.040 sys 0m0.640s 00:05:50.040 11:00:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:05:50.040 11:00:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.040 ************************************ 00:05:50.040 END TEST locking_app_on_locked_coremask 00:05:50.040 ************************************ 00:05:50.040 11:00:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:50.040 11:00:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.040 11:00:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.040 11:00:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.040 ************************************ 00:05:50.040 START TEST locking_overlapped_coremask 00:05:50.040 ************************************ 00:05:50.040 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:50.040 11:00:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1857590 00:05:50.040 11:00:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1857590 /var/tmp/spdk.sock 00:05:50.040 11:00:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:50.040 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1857590 ']' 00:05:50.040 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.040 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.040 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.041 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.041 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.041 [2024-10-06 11:00:47.584843] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:05:50.041 [2024-10-06 11:00:47.584884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857590 ] 00:05:50.300 [2024-10-06 11:00:47.638857] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.300 [2024-10-06 11:00:47.675908] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.300 [2024-10-06 11:00:47.676006] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.300 [2024-10-06 11:00:47.676007] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.559 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.559 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:50.559 11:00:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1857601 00:05:50.559 11:00:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1857601 /var/tmp/spdk2.sock 00:05:50.559 11:00:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:50.559 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:50.559 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1857601 /var/tmp/spdk2.sock 00:05:50.559 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:50.559 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.559 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:50.559 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.559 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1857601 /var/tmp/spdk2.sock 00:05:50.559 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1857601 ']' 00:05:50.559 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.559 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.559 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.559 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.559 11:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.559 [2024-10-06 11:00:47.926617] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:05:50.560 [2024-10-06 11:00:47.926658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857601 ] 00:05:50.560 [2024-10-06 11:00:48.001062] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1857590 has claimed it. 00:05:50.560 [2024-10-06 11:00:48.001102] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:51.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1857601) - No such process 00:05:51.127 ERROR: process (pid: 1857601) is no longer running 00:05:51.127 11:00:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.127 11:00:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:51.127 11:00:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:51.127 11:00:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:51.127 11:00:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:51.127 11:00:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:51.127 11:00:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:51.127 11:00:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:51.127 11:00:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:51.128 11:00:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:51.128 11:00:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1857590 00:05:51.128 11:00:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1857590 ']' 00:05:51.128 11:00:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1857590 00:05:51.128 11:00:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:51.128 11:00:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.128 11:00:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1857590 00:05:51.128 11:00:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.128 11:00:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.128 11:00:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1857590' 00:05:51.128 killing process with pid 1857590 00:05:51.128 11:00:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1857590 00:05:51.128 11:00:48 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1857590 00:05:51.387 00:05:51.387 real 0m1.408s 00:05:51.387 user 0m3.882s 00:05:51.387 sys 0m0.377s 00:05:51.387 11:00:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.387 11:00:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.387 ************************************ 00:05:51.387 END TEST locking_overlapped_coremask 00:05:51.387 ************************************ 00:05:51.646 11:00:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:51.646 11:00:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.646 11:00:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.646 11:00:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.646 ************************************ 00:05:51.646 START TEST locking_overlapped_coremask_via_rpc 00:05:51.646 ************************************ 00:05:51.646 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:51.646 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1857851 00:05:51.646 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1857851 /var/tmp/spdk.sock 00:05:51.646 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:51.646 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1857851 ']' 00:05:51.646 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.646 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.646 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.646 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.646 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.646 [2024-10-06 11:00:49.065557] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:05:51.646 [2024-10-06 11:00:49.065601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857851 ] 00:05:51.646 [2024-10-06 11:00:49.121815] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:51.646 [2024-10-06 11:00:49.121842] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:51.646 [2024-10-06 11:00:49.163633] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.646 [2024-10-06 11:00:49.163732] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.646 [2024-10-06 11:00:49.163733] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.906 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.906 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:51.906 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1857861 00:05:51.906 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1857861 /var/tmp/spdk2.sock 00:05:51.906 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:51.906 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1857861 ']' 00:05:51.906 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.906 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.906 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.906 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.906 11:00:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.906 [2024-10-06 11:00:49.405779] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:05:51.906 [2024-10-06 11:00:49.405824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857861 ] 00:05:52.163 [2024-10-06 11:00:49.482984] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:52.163 [2024-10-06 11:00:49.483010] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.163 [2024-10-06 11:00:49.563730] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.163 [2024-10-06 11:00:49.567105] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.163 [2024-10-06 11:00:49.567105] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.731 [2024-10-06 11:00:50.260138] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1857851 has claimed it. 
00:05:52.731 request: 00:05:52.731 { 00:05:52.731 "method": "framework_enable_cpumask_locks", 00:05:52.731 "req_id": 1 00:05:52.731 } 00:05:52.731 Got JSON-RPC error response 00:05:52.731 response: 00:05:52.731 { 00:05:52.731 "code": -32603, 00:05:52.731 "message": "Failed to claim CPU core: 2" 00:05:52.731 } 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1857851 /var/tmp/spdk.sock 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1857851 ']' 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.731 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.990 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.990 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:52.990 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1857861 /var/tmp/spdk2.sock 00:05:52.990 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1857861 ']' 00:05:52.990 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.990 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.990 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
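The failed call above is the expected negative path of this test (note the NOT wrapper around rpc_cmd): spdk_tgt pid 1857851 already holds the per-core lock files (/var/tmp/spdk_cpu_lock_000..002, verified later by check_remaining_locks), so the second target cannot claim the overlapping core 2. A minimal sketch of the same sequence driven by hand, using only the binaries and flags that appear in this log, and assuming the first target is already running with core 2 in its mask:

  # second target overlaps the first on core 2 (-m 0x1c = cores 2,3,4) but skips lock creation at startup
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  # asking it to claim its cores afterwards fails on the core still locked by pid 1857851
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # expected JSON-RPC error: code -32603, "Failed to claim CPU core: 2"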
00:05:52.990 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.990 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.249 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.249 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:53.249 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:53.249 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:53.249 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:53.249 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:53.249 00:05:53.249 real 0m1.645s 00:05:53.249 user 0m0.805s 00:05:53.249 sys 0m0.139s 00:05:53.249 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.249 11:00:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.249 ************************************ 00:05:53.249 END TEST locking_overlapped_coremask_via_rpc 00:05:53.249 ************************************ 00:05:53.249 11:00:50 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:53.249 11:00:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1857851 ]] 00:05:53.249 11:00:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1857851 00:05:53.249 11:00:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1857851 ']' 00:05:53.249 11:00:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1857851 00:05:53.249 11:00:50 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:53.249 11:00:50 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.249 11:00:50 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1857851 00:05:53.249 11:00:50 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.249 11:00:50 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.249 11:00:50 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1857851' 00:05:53.249 killing process with pid 1857851 00:05:53.249 11:00:50 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1857851 00:05:53.249 11:00:50 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1857851 00:05:53.508 11:00:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1857861 ]] 00:05:53.508 11:00:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1857861 00:05:53.508 11:00:51 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1857861 ']' 00:05:53.508 11:00:51 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1857861 00:05:53.508 11:00:51 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:53.508 11:00:51 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:05:53.508 11:00:51 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1857861 00:05:53.767 11:00:51 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:53.767 11:00:51 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:53.767 11:00:51 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1857861' 00:05:53.767 killing process with pid 1857861 00:05:53.767 11:00:51 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1857861 00:05:53.767 11:00:51 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1857861 00:05:54.027 11:00:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:54.027 11:00:51 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:54.027 11:00:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1857851 ]] 00:05:54.027 11:00:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1857851 00:05:54.027 11:00:51 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1857851 ']' 00:05:54.027 11:00:51 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1857851 00:05:54.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1857851) - No such process 00:05:54.027 11:00:51 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1857851 is not found' 00:05:54.027 Process with pid 1857851 is not found 00:05:54.027 11:00:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1857861 ]] 00:05:54.027 11:00:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1857861 00:05:54.027 11:00:51 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1857861 ']' 00:05:54.027 11:00:51 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1857861 00:05:54.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1857861) - No such process 00:05:54.027 11:00:51 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1857861 is not found' 00:05:54.027 Process with pid 1857861 is not found 00:05:54.027 11:00:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:54.027 00:05:54.027 real 0m13.920s 00:05:54.027 user 0m24.094s 00:05:54.027 sys 0m4.897s 00:05:54.027 11:00:51 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.027 11:00:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.027 ************************************ 00:05:54.027 END TEST cpu_locks 00:05:54.027 ************************************ 00:05:54.027 00:05:54.027 real 0m38.591s 00:05:54.027 user 1m13.778s 00:05:54.027 sys 0m8.340s 00:05:54.027 11:00:51 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.027 11:00:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.027 ************************************ 00:05:54.027 END TEST event 00:05:54.027 ************************************ 00:05:54.027 11:00:51 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:54.027 11:00:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.027 11:00:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.027 11:00:51 -- common/autotest_common.sh@10 -- # set +x 00:05:54.027 ************************************ 00:05:54.027 START TEST thread 00:05:54.027 ************************************ 00:05:54.027 11:00:51 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:54.286 * Looking for test storage... 00:05:54.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:54.286 11:00:51 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:54.287 11:00:51 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:05:54.287 11:00:51 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:54.287 11:00:51 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:54.287 11:00:51 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.287 11:00:51 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.287 11:00:51 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.287 11:00:51 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.287 11:00:51 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.287 11:00:51 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.287 11:00:51 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.287 11:00:51 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.287 11:00:51 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.287 11:00:51 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.287 11:00:51 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.287 11:00:51 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:54.287 11:00:51 thread -- scripts/common.sh@345 -- # : 1 00:05:54.287 11:00:51 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.287 11:00:51 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.287 11:00:51 thread -- scripts/common.sh@365 -- # decimal 1 00:05:54.287 11:00:51 thread -- scripts/common.sh@353 -- # local d=1 00:05:54.287 11:00:51 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.287 11:00:51 thread -- scripts/common.sh@355 -- # echo 1 00:05:54.287 11:00:51 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.287 11:00:51 thread -- scripts/common.sh@366 -- # decimal 2 00:05:54.287 11:00:51 thread -- scripts/common.sh@353 -- # local d=2 00:05:54.287 11:00:51 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.287 11:00:51 thread -- scripts/common.sh@355 -- # echo 2 00:05:54.287 11:00:51 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.287 11:00:51 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.287 11:00:51 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.287 11:00:51 thread -- scripts/common.sh@368 -- # return 0 00:05:54.287 11:00:51 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.287 11:00:51 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:54.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.287 --rc genhtml_branch_coverage=1 00:05:54.287 --rc genhtml_function_coverage=1 00:05:54.287 --rc genhtml_legend=1 00:05:54.287 --rc geninfo_all_blocks=1 00:05:54.287 --rc geninfo_unexecuted_blocks=1 00:05:54.287 00:05:54.287 ' 00:05:54.287 11:00:51 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:54.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.287 --rc genhtml_branch_coverage=1 00:05:54.287 --rc genhtml_function_coverage=1 00:05:54.287 --rc genhtml_legend=1 00:05:54.287 --rc geninfo_all_blocks=1 00:05:54.287 --rc geninfo_unexecuted_blocks=1 00:05:54.287 
00:05:54.287 ' 00:05:54.287 11:00:51 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:54.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.287 --rc genhtml_branch_coverage=1 00:05:54.287 --rc genhtml_function_coverage=1 00:05:54.287 --rc genhtml_legend=1 00:05:54.287 --rc geninfo_all_blocks=1 00:05:54.287 --rc geninfo_unexecuted_blocks=1 00:05:54.287 00:05:54.287 ' 00:05:54.287 11:00:51 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:54.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.287 --rc genhtml_branch_coverage=1 00:05:54.287 --rc genhtml_function_coverage=1 00:05:54.287 --rc genhtml_legend=1 00:05:54.287 --rc geninfo_all_blocks=1 00:05:54.287 --rc geninfo_unexecuted_blocks=1 00:05:54.287 00:05:54.287 ' 00:05:54.287 11:00:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:54.287 11:00:51 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:54.287 11:00:51 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.287 11:00:51 thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.287 ************************************ 00:05:54.287 START TEST thread_poller_perf 00:05:54.287 ************************************ 00:05:54.287 11:00:51 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:54.287 [2024-10-06 11:00:51.771236] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:05:54.287 [2024-10-06 11:00:51.771304] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1858406 ] 00:05:54.287 [2024-10-06 11:00:51.830307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.546 [2024-10-06 11:00:51.869095] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.546 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:55.483 ====================================== 00:05:55.483 busy:2107500632 (cyc) 00:05:55.483 total_run_count: 427000 00:05:55.483 tsc_hz: 2100000000 (cyc) 00:05:55.483 ====================================== 00:05:55.483 poller_cost: 4935 (cyc), 2350 (nsec) 00:05:55.483 00:05:55.483 real 0m1.182s 00:05:55.483 user 0m1.096s 00:05:55.483 sys 0m0.082s 00:05:55.483 11:00:52 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.483 11:00:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.483 ************************************ 00:05:55.483 END TEST thread_poller_perf 00:05:55.483 ************************************ 00:05:55.483 11:00:52 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:55.483 11:00:52 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:55.483 11:00:52 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.483 11:00:52 thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.483 ************************************ 00:05:55.483 START TEST thread_poller_perf 00:05:55.483 ************************************ 00:05:55.483 11:00:52 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:55.483 [2024-10-06 11:00:53.010214] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:05:55.483 [2024-10-06 11:00:53.010276] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1858659 ] 00:05:55.742 [2024-10-06 11:00:53.067273] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.742 [2024-10-06 11:00:53.104364] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.742 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:56.679 ====================================== 00:05:56.679 busy:2101495206 (cyc) 00:05:56.679 total_run_count: 5634000 00:05:56.679 tsc_hz: 2100000000 (cyc) 00:05:56.679 ====================================== 00:05:56.679 poller_cost: 373 (cyc), 177 (nsec) 00:05:56.679 00:05:56.679 real 0m1.174s 00:05:56.679 user 0m1.097s 00:05:56.679 sys 0m0.074s 00:05:56.679 11:00:54 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.679 11:00:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:56.679 ************************************ 00:05:56.679 END TEST thread_poller_perf 00:05:56.679 ************************************ 00:05:56.679 11:00:54 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:56.679 00:05:56.679 real 0m2.641s 00:05:56.679 user 0m2.340s 00:05:56.679 sys 0m0.311s 00:05:56.679 11:00:54 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.679 11:00:54 thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.679 ************************************ 00:05:56.679 END TEST thread 00:05:56.679 ************************************ 00:05:56.679 11:00:54 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:56.679 11:00:54 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:56.679 11:00:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.679 11:00:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.679 11:00:54 -- common/autotest_common.sh@10 -- # set +x 00:05:56.938 ************************************ 00:05:56.938 START TEST app_cmdline 00:05:56.938 ************************************ 00:05:56.938 11:00:54 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:56.938 * Looking for test storage... 00:05:56.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:56.938 11:00:54 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:56.938 11:00:54 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:56.938 11:00:54 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:05:56.938 11:00:54 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.938 11:00:54 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:56.938 11:00:54 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.939 11:00:54 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:56.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.939 --rc genhtml_branch_coverage=1 00:05:56.939 --rc genhtml_function_coverage=1 00:05:56.939 --rc genhtml_legend=1 00:05:56.939 --rc geninfo_all_blocks=1 00:05:56.939 --rc geninfo_unexecuted_blocks=1 00:05:56.939 00:05:56.939 ' 00:05:56.939 11:00:54 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:56.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.939 --rc genhtml_branch_coverage=1 00:05:56.939 --rc genhtml_function_coverage=1 00:05:56.939 --rc genhtml_legend=1 00:05:56.939 --rc geninfo_all_blocks=1 00:05:56.939 --rc geninfo_unexecuted_blocks=1 00:05:56.939 00:05:56.939 ' 00:05:56.939 11:00:54 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:56.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.939 --rc genhtml_branch_coverage=1 00:05:56.939 --rc genhtml_function_coverage=1 00:05:56.939 --rc genhtml_legend=1 00:05:56.939 --rc geninfo_all_blocks=1 00:05:56.939 --rc geninfo_unexecuted_blocks=1 00:05:56.939 00:05:56.939 ' 00:05:56.939 11:00:54 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:56.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.939 --rc genhtml_branch_coverage=1 00:05:56.939 --rc genhtml_function_coverage=1 00:05:56.939 --rc genhtml_legend=1 00:05:56.939 --rc geninfo_all_blocks=1 00:05:56.939 --rc geninfo_unexecuted_blocks=1 00:05:56.939 00:05:56.939 ' 00:05:56.939 11:00:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:56.939 11:00:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1858947 00:05:56.939 11:00:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1858947 00:05:56.939 11:00:54 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:56.939 11:00:54 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1858947 ']' 00:05:56.939 11:00:54 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.939 11:00:54 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.939 11:00:54 app_cmdline -- common/autotest_common.sh@838 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.939 11:00:54 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.939 11:00:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:56.939 [2024-10-06 11:00:54.470369] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:05:56.939 [2024-10-06 11:00:54.470415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1858947 ] 00:05:57.197 [2024-10-06 11:00:54.524764] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.197 [2024-10-06 11:00:54.562762] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.197 11:00:54 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.197 11:00:54 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:57.197 11:00:54 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:57.456 { 00:05:57.456 "version": "SPDK v25.01-pre git sha1 3950cd1bb", 00:05:57.456 "fields": { 00:05:57.456 "major": 25, 00:05:57.456 "minor": 1, 00:05:57.456 "patch": 0, 00:05:57.456 "suffix": "-pre", 00:05:57.456 "commit": "3950cd1bb" 00:05:57.456 } 00:05:57.456 } 00:05:57.456 11:00:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:57.456 11:00:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:57.456 11:00:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:57.456 11:00:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:57.456 11:00:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:57.456 11:00:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:57.456 11:00:54 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.456 11:00:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:57.456 11:00:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:57.456 11:00:54 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.456 11:00:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:57.456 11:00:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:57.456 11:00:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.456 11:00:54 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:57.456 11:00:54 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.456 11:00:54 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.456 11:00:54 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.456 11:00:54 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.456 11:00:54 app_cmdline -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:05:57.456 11:00:54 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.456 11:00:54 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.456 11:00:54 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.456 11:00:54 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:57.456 11:00:54 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.716 request: 00:05:57.716 { 00:05:57.716 "method": "env_dpdk_get_mem_stats", 00:05:57.716 "req_id": 1 00:05:57.716 } 00:05:57.716 Got JSON-RPC error response 00:05:57.716 response: 00:05:57.716 { 00:05:57.716 "code": -32601, 00:05:57.716 "message": "Method not found" 00:05:57.716 } 00:05:57.716 11:00:55 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:57.716 11:00:55 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.716 11:00:55 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.716 11:00:55 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.716 11:00:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1858947 00:05:57.716 11:00:55 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1858947 ']' 00:05:57.716 11:00:55 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1858947 00:05:57.716 11:00:55 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:57.716 11:00:55 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.716 11:00:55 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1858947 00:05:57.716 11:00:55 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.716 11:00:55 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.716 11:00:55 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1858947' 00:05:57.716 killing process with pid 1858947 00:05:57.716 11:00:55 app_cmdline -- common/autotest_common.sh@969 -- # kill 1858947 00:05:57.716 11:00:55 app_cmdline -- common/autotest_common.sh@974 -- # wait 1858947 00:05:57.975 00:05:57.975 real 0m1.261s 00:05:57.975 user 0m1.468s 00:05:57.975 sys 0m0.418s 00:05:57.975 11:00:55 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.975 11:00:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:57.975 ************************************ 00:05:57.975 END TEST app_cmdline 00:05:57.975 ************************************ 00:05:57.975 11:00:55 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:57.975 11:00:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.235 11:00:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.235 11:00:55 -- common/autotest_common.sh@10 -- # set +x 00:05:58.235 ************************************ 00:05:58.235 START TEST version 00:05:58.235 ************************************ 00:05:58.235 11:00:55 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:58.235 * Looking for test storage... 
00:05:58.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:58.235 11:00:55 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:58.235 11:00:55 version -- common/autotest_common.sh@1681 -- # lcov --version 00:05:58.235 11:00:55 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:58.235 11:00:55 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:58.235 11:00:55 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.235 11:00:55 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.235 11:00:55 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.235 11:00:55 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.235 11:00:55 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.235 11:00:55 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.235 11:00:55 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.235 11:00:55 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.235 11:00:55 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.235 11:00:55 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.235 11:00:55 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.235 11:00:55 version -- scripts/common.sh@344 -- # case "$op" in 00:05:58.235 11:00:55 version -- scripts/common.sh@345 -- # : 1 00:05:58.235 11:00:55 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.235 11:00:55 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.235 11:00:55 version -- scripts/common.sh@365 -- # decimal 1 00:05:58.235 11:00:55 version -- scripts/common.sh@353 -- # local d=1 00:05:58.235 11:00:55 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.235 11:00:55 version -- scripts/common.sh@355 -- # echo 1 00:05:58.235 11:00:55 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.235 11:00:55 version -- scripts/common.sh@366 -- # decimal 2 00:05:58.235 11:00:55 version -- scripts/common.sh@353 -- # local d=2 00:05:58.235 11:00:55 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.235 11:00:55 version -- scripts/common.sh@355 -- # echo 2 00:05:58.235 11:00:55 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.235 11:00:55 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.235 11:00:55 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.235 11:00:55 version -- scripts/common.sh@368 -- # return 0 00:05:58.235 11:00:55 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.235 11:00:55 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:58.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.235 --rc genhtml_branch_coverage=1 00:05:58.235 --rc genhtml_function_coverage=1 00:05:58.235 --rc genhtml_legend=1 00:05:58.236 --rc geninfo_all_blocks=1 00:05:58.236 --rc geninfo_unexecuted_blocks=1 00:05:58.236 00:05:58.236 ' 00:05:58.236 11:00:55 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:58.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.236 --rc genhtml_branch_coverage=1 00:05:58.236 --rc genhtml_function_coverage=1 00:05:58.236 --rc genhtml_legend=1 00:05:58.236 --rc geninfo_all_blocks=1 00:05:58.236 --rc geninfo_unexecuted_blocks=1 00:05:58.236 00:05:58.236 ' 00:05:58.236 11:00:55 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:58.236 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.236 --rc genhtml_branch_coverage=1 00:05:58.236 --rc genhtml_function_coverage=1 00:05:58.236 --rc genhtml_legend=1 00:05:58.236 --rc geninfo_all_blocks=1 00:05:58.236 --rc geninfo_unexecuted_blocks=1 00:05:58.236 00:05:58.236 ' 00:05:58.236 11:00:55 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:58.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.236 --rc genhtml_branch_coverage=1 00:05:58.236 --rc genhtml_function_coverage=1 00:05:58.236 --rc genhtml_legend=1 00:05:58.236 --rc geninfo_all_blocks=1 00:05:58.236 --rc geninfo_unexecuted_blocks=1 00:05:58.236 00:05:58.236 ' 00:05:58.236 11:00:55 version -- app/version.sh@17 -- # get_header_version major 00:05:58.236 11:00:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:58.236 11:00:55 version -- app/version.sh@14 -- # cut -f2 00:05:58.236 11:00:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.236 11:00:55 version -- app/version.sh@17 -- # major=25 00:05:58.236 11:00:55 version -- app/version.sh@18 -- # get_header_version minor 00:05:58.236 11:00:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:58.236 11:00:55 version -- app/version.sh@14 -- # cut -f2 00:05:58.236 11:00:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.236 11:00:55 version -- app/version.sh@18 -- # minor=1 00:05:58.236 11:00:55 version -- app/version.sh@19 -- # get_header_version patch 00:05:58.236 11:00:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:58.236 11:00:55 version -- app/version.sh@14 -- # cut -f2 00:05:58.236 11:00:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.236 11:00:55 version -- app/version.sh@19 -- # patch=0 00:05:58.236 11:00:55 version -- app/version.sh@20 -- # get_header_version suffix 00:05:58.236 11:00:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:58.236 11:00:55 version -- app/version.sh@14 -- # cut -f2 00:05:58.236 11:00:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.236 11:00:55 version -- app/version.sh@20 -- # suffix=-pre 00:05:58.236 11:00:55 version -- app/version.sh@22 -- # version=25.1 00:05:58.236 11:00:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:58.236 11:00:55 version -- app/version.sh@28 -- # version=25.1rc0 00:05:58.236 11:00:55 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:58.236 11:00:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:58.236 11:00:55 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:58.236 11:00:55 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:58.236 00:05:58.236 real 0m0.216s 00:05:58.236 user 0m0.147s 00:05:58.236 sys 0m0.113s 00:05:58.236 11:00:55 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.236 
11:00:55 version -- common/autotest_common.sh@10 -- # set +x 00:05:58.236 ************************************ 00:05:58.236 END TEST version 00:05:58.236 ************************************ 00:05:58.495 11:00:55 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:58.495 11:00:55 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:58.495 11:00:55 -- spdk/autotest.sh@194 -- # uname -s 00:05:58.495 11:00:55 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:58.495 11:00:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:58.495 11:00:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:58.495 11:00:55 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:58.495 11:00:55 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:58.495 11:00:55 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:58.495 11:00:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:58.495 11:00:55 -- common/autotest_common.sh@10 -- # set +x 00:05:58.495 11:00:55 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:58.495 11:00:55 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:58.495 11:00:55 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:58.495 11:00:55 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:58.495 11:00:55 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:58.495 11:00:55 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:58.495 11:00:55 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:58.495 11:00:55 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:58.495 11:00:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.495 11:00:55 -- common/autotest_common.sh@10 -- # set +x 00:05:58.495 ************************************ 00:05:58.495 START TEST nvmf_tcp 00:05:58.495 ************************************ 00:05:58.495 11:00:55 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:58.495 * Looking for test storage... 
00:05:58.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:58.495 11:00:55 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:58.495 11:00:55 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:58.495 11:00:55 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:58.495 11:00:56 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:58.495 11:00:56 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.495 11:00:56 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.495 11:00:56 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.495 11:00:56 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.495 11:00:56 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.496 11:00:56 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.496 11:00:56 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.496 11:00:56 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.496 11:00:56 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.496 11:00:56 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.496 11:00:56 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.496 11:00:56 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:58.496 11:00:56 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:58.496 11:00:56 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.496 11:00:56 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.496 11:00:56 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:58.756 11:00:56 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:58.756 11:00:56 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.756 11:00:56 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:58.756 11:00:56 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.756 11:00:56 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:58.756 11:00:56 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:58.756 11:00:56 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.756 11:00:56 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:58.756 11:00:56 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.756 11:00:56 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.756 11:00:56 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.756 11:00:56 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:58.756 11:00:56 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.756 11:00:56 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:58.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.756 --rc genhtml_branch_coverage=1 00:05:58.756 --rc genhtml_function_coverage=1 00:05:58.756 --rc genhtml_legend=1 00:05:58.756 --rc geninfo_all_blocks=1 00:05:58.756 --rc geninfo_unexecuted_blocks=1 00:05:58.756 00:05:58.756 ' 00:05:58.756 11:00:56 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:58.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.756 --rc genhtml_branch_coverage=1 00:05:58.756 --rc genhtml_function_coverage=1 00:05:58.756 --rc genhtml_legend=1 00:05:58.756 --rc geninfo_all_blocks=1 00:05:58.756 --rc geninfo_unexecuted_blocks=1 00:05:58.756 00:05:58.756 ' 00:05:58.756 11:00:56 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:58.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.756 --rc genhtml_branch_coverage=1 00:05:58.756 --rc genhtml_function_coverage=1 00:05:58.756 --rc genhtml_legend=1 00:05:58.756 --rc geninfo_all_blocks=1 00:05:58.756 --rc geninfo_unexecuted_blocks=1 00:05:58.756 00:05:58.756 ' 00:05:58.756 11:00:56 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:58.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.756 --rc genhtml_branch_coverage=1 00:05:58.756 --rc genhtml_function_coverage=1 00:05:58.756 --rc genhtml_legend=1 00:05:58.756 --rc geninfo_all_blocks=1 00:05:58.756 --rc geninfo_unexecuted_blocks=1 00:05:58.756 00:05:58.756 ' 00:05:58.756 11:00:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:58.756 11:00:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:58.756 11:00:56 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:58.756 11:00:56 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:58.756 11:00:56 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.756 11:00:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.756 ************************************ 00:05:58.756 START TEST nvmf_target_core 00:05:58.756 ************************************ 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:58.756 * Looking for test storage... 00:05:58.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:58.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.756 --rc genhtml_branch_coverage=1 00:05:58.756 --rc genhtml_function_coverage=1 00:05:58.756 --rc genhtml_legend=1 00:05:58.756 --rc geninfo_all_blocks=1 00:05:58.756 --rc geninfo_unexecuted_blocks=1 00:05:58.756 00:05:58.756 ' 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:58.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.756 --rc genhtml_branch_coverage=1 00:05:58.756 --rc genhtml_function_coverage=1 00:05:58.756 --rc genhtml_legend=1 00:05:58.756 --rc geninfo_all_blocks=1 00:05:58.756 --rc geninfo_unexecuted_blocks=1 00:05:58.756 00:05:58.756 ' 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:58.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.756 --rc genhtml_branch_coverage=1 00:05:58.756 --rc genhtml_function_coverage=1 00:05:58.756 --rc genhtml_legend=1 00:05:58.756 --rc geninfo_all_blocks=1 00:05:58.756 --rc geninfo_unexecuted_blocks=1 00:05:58.756 00:05:58.756 ' 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:58.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.756 --rc genhtml_branch_coverage=1 00:05:58.756 --rc genhtml_function_coverage=1 00:05:58.756 --rc genhtml_legend=1 00:05:58.756 --rc geninfo_all_blocks=1 00:05:58.756 --rc geninfo_unexecuted_blocks=1 00:05:58.756 00:05:58.756 ' 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:58.756 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:58.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.757 11:00:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:59.017 
************************************ 00:05:59.017 START TEST nvmf_abort 00:05:59.017 ************************************ 00:05:59.017 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:59.017 * Looking for test storage... 00:05:59.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.017 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:59.017 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:05:59.017 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:59.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.018 --rc genhtml_branch_coverage=1 00:05:59.018 --rc genhtml_function_coverage=1 00:05:59.018 --rc genhtml_legend=1 00:05:59.018 --rc geninfo_all_blocks=1 00:05:59.018 --rc geninfo_unexecuted_blocks=1 00:05:59.018 00:05:59.018 ' 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:59.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.018 --rc genhtml_branch_coverage=1 00:05:59.018 --rc genhtml_function_coverage=1 00:05:59.018 --rc genhtml_legend=1 00:05:59.018 --rc geninfo_all_blocks=1 00:05:59.018 --rc geninfo_unexecuted_blocks=1 00:05:59.018 00:05:59.018 ' 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:59.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.018 --rc genhtml_branch_coverage=1 00:05:59.018 --rc genhtml_function_coverage=1 00:05:59.018 --rc genhtml_legend=1 00:05:59.018 --rc geninfo_all_blocks=1 00:05:59.018 --rc geninfo_unexecuted_blocks=1 00:05:59.018 00:05:59.018 ' 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:59.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.018 --rc genhtml_branch_coverage=1 00:05:59.018 --rc genhtml_function_coverage=1 00:05:59.018 --rc genhtml_legend=1 00:05:59.018 --rc geninfo_all_blocks=1 00:05:59.018 --rc geninfo_unexecuted_blocks=1 00:05:59.018 00:05:59.018 ' 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:59.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:59.018 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:59.019 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:59.019 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:59.019 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:59.019 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:59.019 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:59.019 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:59.019 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.299 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:04.299 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:04.299 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:04.299 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:04.299 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:04.299 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:04.299 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:04.299 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:04.299 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:04.299 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:04.299 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:04.300 11:01:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:04.300 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:04.300 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:04.300 11:01:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:04.300 Found net devices under 0000:af:00.0: cvl_0_0 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:04.300 Found net devices under 0000:af:00.1: cvl_0_1 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:04.300 11:01:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:04.300 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:04.559 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:04.559 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:04.559 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:04.559 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:04.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:04.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:06:04.560 00:06:04.560 --- 10.0.0.2 ping statistics --- 00:06:04.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.560 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:06:04.560 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:04.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:04.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:06:04.560 00:06:04.560 --- 10.0.0.1 ping statistics --- 00:06:04.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.560 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:06:04.560 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:04.560 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:06:04.560 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:04.560 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:04.560 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:04.560 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:04.560 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:04.560 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:04.560 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:04.560 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:04.560 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:04.560 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:04.560 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.560 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=1862405 00:06:04.560 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:04.560 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1862405 00:06:04.560 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1862405 ']' 00:06:04.560 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.560 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.560 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.560 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.560 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.560 [2024-10-06 11:01:02.080032] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:06:04.560 [2024-10-06 11:01:02.080082] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:04.823 [2024-10-06 11:01:02.140498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.823 [2024-10-06 11:01:02.181289] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:04.823 [2024-10-06 11:01:02.181330] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:04.823 [2024-10-06 11:01:02.181338] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:04.823 [2024-10-06 11:01:02.181345] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:04.823 [2024-10-06 11:01:02.181351] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:04.823 [2024-10-06 11:01:02.182152] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.823 [2024-10-06 11:01:02.182174] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.823 [2024-10-06 11:01:02.182173] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.823 [2024-10-06 11:01:02.316680] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.823 Malloc0 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.823 Delay0 
00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.823 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:05.145 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.145 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:05.145 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.145 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:05.145 [2024-10-06 11:01:02.401787] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:05.145 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.145 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:05.145 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.145 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:05.145 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.146 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:05.146 [2024-10-06 11:01:02.518834] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:07.720 Initializing NVMe Controllers 00:06:07.720 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:07.720 controller IO queue size 128 less than required 00:06:07.720 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:07.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:07.720 Initialization complete. Launching workers. 
00:06:07.720 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37207 00:06:07.720 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37272, failed to submit 62 00:06:07.720 success 37211, unsuccessful 61, failed 0 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:07.720 rmmod nvme_tcp 00:06:07.720 rmmod nvme_fabrics 00:06:07.720 rmmod nvme_keyring 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1862405 ']' 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1862405 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1862405 ']' 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1862405 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1862405 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1862405' 00:06:07.720 killing process with pid 1862405 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1862405 00:06:07.720 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1862405 00:06:07.720 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:07.720 11:01:05 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:07.720 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:07.720 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:07.720 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:06:07.720 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:07.721 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:06:07.721 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:07.721 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:07.721 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.721 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:07.721 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.631 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:09.631 00:06:09.631 real 0m10.757s 00:06:09.631 user 0m11.527s 00:06:09.631 sys 0m5.194s 00:06:09.631 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.631 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:09.631 ************************************ 00:06:09.631 END TEST nvmf_abort 00:06:09.631 ************************************ 00:06:09.631 11:01:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:09.631 11:01:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:09.631 11:01:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.631 11:01:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:09.631 ************************************ 00:06:09.631 START TEST nvmf_ns_hotplug_stress 00:06:09.631 ************************************ 00:06:09.631 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:09.892 * Looking for test storage... 
00:06:09.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:09.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.892 --rc genhtml_branch_coverage=1 00:06:09.892 --rc genhtml_function_coverage=1 00:06:09.892 --rc genhtml_legend=1 00:06:09.892 --rc geninfo_all_blocks=1 00:06:09.892 --rc geninfo_unexecuted_blocks=1 00:06:09.892 00:06:09.892 ' 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:09.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.892 --rc genhtml_branch_coverage=1 00:06:09.892 --rc genhtml_function_coverage=1 00:06:09.892 --rc genhtml_legend=1 00:06:09.892 --rc geninfo_all_blocks=1 00:06:09.892 --rc geninfo_unexecuted_blocks=1 00:06:09.892 00:06:09.892 ' 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:09.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.892 --rc genhtml_branch_coverage=1 00:06:09.892 --rc genhtml_function_coverage=1 00:06:09.892 --rc genhtml_legend=1 00:06:09.892 --rc geninfo_all_blocks=1 00:06:09.892 --rc geninfo_unexecuted_blocks=1 00:06:09.892 00:06:09.892 ' 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:09.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.892 --rc genhtml_branch_coverage=1 00:06:09.892 --rc genhtml_function_coverage=1 00:06:09.892 --rc genhtml_legend=1 00:06:09.892 --rc geninfo_all_blocks=1 00:06:09.892 --rc geninfo_unexecuted_blocks=1 00:06:09.892 00:06:09.892 ' 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:09.892 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:09.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:09.893 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:16.470 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.470 
11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:16.470 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:16.470 Found net devices under 0000:af:00.0: cvl_0_0 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:16.470 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:16.471 Found net devices under 0000:af:00.1: cvl_0_1 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:16.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:16.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:16.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:06:16.471 00:06:16.471 --- 10.0.0.2 ping statistics --- 00:06:16.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.471 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:16.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:16.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:06:16.471 00:06:16.471 --- 10.0.0.1 ping statistics --- 00:06:16.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.471 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1866496 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1866496 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
1866496 ']' 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:16.471 [2024-10-06 11:01:13.163280] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:06:16.471 [2024-10-06 11:01:13.163322] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:16.471 [2024-10-06 11:01:13.222229] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.471 [2024-10-06 11:01:13.259459] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:16.471 [2024-10-06 11:01:13.259502] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:16.471 [2024-10-06 11:01:13.259509] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:16.471 [2024-10-06 11:01:13.259515] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:16.471 [2024-10-06 11:01:13.259519] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
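For readers following the trace: the nvmfappstart step above amounts to launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace created earlier and then waiting until its RPC socket answers. A minimal sketch of that pattern, using the paths and the 0xE core mask from this run; the poll loop is illustrative and stands in for the real waitforlisten helper:

  # start the target inside the target-side network namespace
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll the default UNIX RPC socket until the app is up (illustrative loop)
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done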
00:06:16.471 [2024-10-06 11:01:13.260474] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.471 [2024-10-06 11:01:13.260563] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.471 [2024-10-06 11:01:13.260565] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:16.471 [2024-10-06 11:01:13.558704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:16.471 [2024-10-06 11:01:13.962716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:16.471 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:16.731 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:16.990 Malloc0 00:06:16.990 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:17.250 Delay0 00:06:17.250 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.250 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:17.509 NULL1 00:06:17.509 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:17.768 11:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:17.768 11:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1866773 00:06:17.768 11:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:17.768 11:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.148 Read completed with error (sct=0, sc=11) 00:06:19.148 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.148 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:19.148 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:19.408 true 00:06:19.408 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:19.408 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.352 11:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.352 11:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:20.352 11:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:20.352 true 00:06:20.611 11:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:20.611 11:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.611 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
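Taken together, the steps traced above provision the target and start the load generator: create the TCP transport, the cnode1 subsystem and its listeners, the Malloc0/Delay0/NULL1 bdevs, attach the namespaces, then launch spdk_nvme_perf against the listener. A condensed view of that sequence, with the same RPCs as in the trace and rpc.py standing in for the full workspace path:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 512 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py bdev_null_create NULL1 1000 512
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # background load generator; its PID (PERF_PID) gates the hotplug loop
  spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!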
00:06:20.873 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:20.873 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:21.134 true 00:06:21.134 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:21.134 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.513 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.513 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:22.513 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:22.513 true 00:06:22.772 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:22.772 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.772 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.030 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:23.031 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:23.289 true 00:06:23.289 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:23.289 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.668 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
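The repeating @44–@50 pattern in the trace is the hot-plug loop itself: while the perf process is still alive, namespace 1 is detached, Delay0 is re-attached, and NULL1 is grown by one block per pass. Roughly, with the loop body reconstructed from the trace rather than quoted verbatim from ns_hotplug_stress.sh:

  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 "$null_size"
  done

The interleaved "Read completed with error" / "Message suppressed 999 times" lines are the reads that land while a namespace is detached; exercising that failure path under active I/O is the point of this test.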
00:06:24.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.668 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:24.668 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:24.926 true 00:06:24.926 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:24.926 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.863 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.863 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:25.863 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:26.123 true 00:06:26.123 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:26.123 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.383 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.383 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:26.383 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:26.642 true 00:06:26.642 11:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:26.642 11:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.579 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.838 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:27.838 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:28.097 true 00:06:28.097 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:28.097 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.357 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.616 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:28.616 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:28.616 true 00:06:28.616 11:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:28.616 11:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.995 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.995 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:29.995 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:30.254 true 00:06:30.254 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:30.254 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.192 11:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.192 11:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:31.192 11:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:31.452 true 00:06:31.452 11:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:31.452 11:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.452 11:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.711 11:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:31.711 11:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:31.970 true 00:06:31.970 11:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:31.970 11:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.907 11:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.166 11:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:33.167 11:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:33.425 true 00:06:33.425 11:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:33.425 11:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.361 11:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.361 11:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:34.362 11:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:34.620 true 00:06:34.620 11:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:34.620 11:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.879 11:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.138 11:01:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:35.138 11:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:35.138 true 00:06:35.398 11:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:35.398 11:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.336 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.594 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:36.594 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:36.852 true 00:06:36.852 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:36.852 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.786 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.786 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:37.786 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:38.044 true 00:06:38.044 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:38.044 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.302 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.302 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:38.302 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:38.560 true 00:06:38.560 11:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:38.560 11:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.938 11:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.938 11:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:39.938 11:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:40.197 true 00:06:40.197 11:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:40.197 11:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.133 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.133 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:41.133 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:41.390 true 00:06:41.390 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:41.390 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.649 11:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.649 11:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:41.649 11:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1022 00:06:41.941 true 00:06:41.941 11:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:41.941 11:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.314 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.314 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:43.314 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:43.572 true 00:06:43.572 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:43.572 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.507 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.507 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:44.507 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:44.765 true 00:06:44.765 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:44.765 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.022 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.022 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:45.022 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:45.280 true 00:06:45.280 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:45.280 
11:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.656 11:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.656 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:46.656 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:46.914 true 00:06:46.914 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:46.914 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.850 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.850 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:47.850 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:47.850 Initializing NVMe Controllers 00:06:47.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:47.850 Controller IO queue size 128, less than required. 00:06:47.850 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:47.850 Controller IO queue size 128, less than required. 00:06:47.850 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:47.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:47.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:47.850 Initialization complete. Launching workers. 
00:06:47.850 ======================================================== 00:06:47.850 Latency(us) 00:06:47.850 Device Information : IOPS MiB/s Average min max 00:06:47.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1971.95 0.96 44867.37 2975.06 1192143.46 00:06:47.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17816.88 8.70 7184.50 2096.87 297413.37 00:06:47.850 ======================================================== 00:06:47.850 Total : 19788.83 9.66 10939.58 2096.87 1192143.46 00:06:47.850 00:06:48.109 true 00:06:48.109 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1866773 00:06:48.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1866773) - No such process 00:06:48.109 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1866773 00:06:48.109 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.109 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:48.369 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:48.369 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:48.369 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:48.369 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.369 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:48.628 null0 00:06:48.628 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.628 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.628 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:48.887 null1 00:06:48.887 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.887 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.887 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:48.887 null2 00:06:49.146 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.146 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.146 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 
00:06:49.146 null3 00:06:49.146 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.146 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.146 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:49.406 null4 00:06:49.406 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.406 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.406 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:49.666 null5 00:06:49.666 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.666 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.666 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:49.927 null6 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:49.927 null7 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:49.927 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:49.928 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.928 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:49.928 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.928 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1872225 1872226 1872227 1872228 1872230 1872232 1872234 1872236 00:06:49.928 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:49.928 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.928 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:49.928 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.928 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:49.928 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:49.928 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.928 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.928 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:50.187 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:50.187 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:50.187 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.187 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:50.187 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:50.187 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:50.187 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:50.187 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.447 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:50.706 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:50.706 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.706 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:50.706 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:50.706 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:50.706 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:50.706 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:50.706 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:50.706 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.706 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.706 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:50.706 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.706 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.706 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
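The @14-@18 records that fill the rest of this phase are those eight workers each running the same ten-iteration attach/detach cycle against nqn.2016-06.io.spdk:cnode1. Again reconstructed from the traced RPCs (same $rpc_py shorthand as above; a sketch under those assumptions, not the script verbatim), each worker does roughly:

  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          # @17: expose the null bdev as namespace <nsid> of cnode1
          $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          # @18: hot-remove that namespace again
          $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }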
00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:50.965 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:50.966 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.966 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:50.966 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:50.966 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:50.966 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:50.966 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.225 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:51.484 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.484 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:51.484 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.484 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.484 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.484 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.484 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.484 11:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:51.746 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:52.067 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.067 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.067 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.067 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:52.067 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.067 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.067 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.067 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.067 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.067 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.067 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.068 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:52.068 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.068 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:52.068 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.068 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.068 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.068 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.068 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.068 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.068 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.068 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.068 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:52.068 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.068 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.068 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:52.068 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.068 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.068 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:52.378 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.378 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.378 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.378 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:52.378 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:52.378 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:52.378 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:52.378 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.378 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.378 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.378 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.637 11:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.637 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.637 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.637 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:52.637 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.637 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:52.637 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:52.637 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:52.637 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.904 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:53.163 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:53.163 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:53.163 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:53.164 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.164 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:53.164 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:53.164 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:53.164 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.423 11:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.682 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:53.941 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:53.941 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:53.941 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:53.941 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:53.941 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:53.941 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:53.941 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.941 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:54.199 rmmod nvme_tcp 00:06:54.199 rmmod nvme_fabrics 00:06:54.199 rmmod nvme_keyring 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:54.199 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:54.200 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1866496 ']' 00:06:54.200 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1866496 00:06:54.200 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1866496 ']' 00:06:54.200 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1866496 00:06:54.200 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:54.200 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.200 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1866496 00:06:54.459 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:54.459 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:54.459 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1866496' 00:06:54.459 killing process with pid 1866496 00:06:54.459 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1866496 00:06:54.459 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1866496 00:06:54.459 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:54.459 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:54.459 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:54.459 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- 
# iptr 00:06:54.459 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:06:54.459 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:54.459 11:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:06:54.459 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:54.459 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:54.459 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.459 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:54.459 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.000 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:57.000 00:06:57.000 real 0m46.891s 00:06:57.000 user 3m11.453s 00:06:57.000 sys 0m14.922s 00:06:57.000 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.000 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:57.000 ************************************ 00:06:57.000 END TEST nvmf_ns_hotplug_stress 00:06:57.000 ************************************ 00:06:57.000 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:57.000 11:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:57.000 11:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.000 11:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:57.000 ************************************ 00:06:57.000 START TEST nvmf_delete_subsystem 00:06:57.000 ************************************ 00:06:57.000 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:57.000 * Looking for test storage... 
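The ns_hotplug_stress run that just finished above repeatedly attaches and detaches namespaces on nqn.2016-06.io.spdk:cnode1 through rpc.py (script lines 16-18 of ns_hotplug_stress.sh in the trace). A minimal sequential sketch of that add/remove cycle, assuming the subsystem and the null bdevs null0..null7 were created earlier in the test (their setup is outside this excerpt, and the real script interleaves the calls rather than issuing them strictly in order):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Ten hotplug rounds: attach namespaces 1-8 backed by null bdevs, then remove them all.
  for ((i = 0; i < 10; i++)); do
      for n in {1..8}; do
          "$rpc" nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"
      done
      for n in {1..8}; do
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
      done
  done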
00:06:57.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.000 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:57.000 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:06:57.000 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:57.000 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:57.000 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.000 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.000 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.000 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.000 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.000 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:57.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.001 --rc genhtml_branch_coverage=1 00:06:57.001 --rc genhtml_function_coverage=1 00:06:57.001 --rc genhtml_legend=1 00:06:57.001 --rc geninfo_all_blocks=1 00:06:57.001 --rc geninfo_unexecuted_blocks=1 00:06:57.001 00:06:57.001 ' 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:57.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.001 --rc genhtml_branch_coverage=1 00:06:57.001 --rc genhtml_function_coverage=1 00:06:57.001 --rc genhtml_legend=1 00:06:57.001 --rc geninfo_all_blocks=1 00:06:57.001 --rc geninfo_unexecuted_blocks=1 00:06:57.001 00:06:57.001 ' 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:57.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.001 --rc genhtml_branch_coverage=1 00:06:57.001 --rc genhtml_function_coverage=1 00:06:57.001 --rc genhtml_legend=1 00:06:57.001 --rc geninfo_all_blocks=1 00:06:57.001 --rc geninfo_unexecuted_blocks=1 00:06:57.001 00:06:57.001 ' 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:57.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.001 --rc genhtml_branch_coverage=1 00:06:57.001 --rc genhtml_function_coverage=1 00:06:57.001 --rc genhtml_legend=1 00:06:57.001 --rc geninfo_all_blocks=1 00:06:57.001 --rc geninfo_unexecuted_blocks=1 00:06:57.001 00:06:57.001 ' 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:57.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.001 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:57.002 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:57.002 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:57.002 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:02.278 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:02.278 
11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:02.278 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:02.278 Found net devices under 0000:af:00.0: cvl_0_0 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:02.278 Found net devices under 0000:af:00.1: cvl_0_1 
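The discovery above walks the supported Intel E810/x722 and Mellanox PCI IDs and resolves each matching function to its kernel net device through sysfs, which is how cvl_0_0 and cvl_0_1 are found. A one-line sketch of that lookup for the two ports reported in this log (PCI addresses copied from the log; purely illustrative):

  # Print the net device name(s) under each E810 port, e.g. cvl_0_0 / cvl_0_1.
  for pci in 0000:af:00.0 0000:af:00.1; do
      ls "/sys/bus/pci/devices/$pci/net/"
  done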
00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:02.278 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:02.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:02.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:07:02.279 00:07:02.279 --- 10.0.0.2 ping statistics --- 00:07:02.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.279 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:02.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:02.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:07:02.279 00:07:02.279 --- 10.0.0.1 ping statistics --- 00:07:02.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.279 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1876545 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1876545 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1876545 ']' 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.279 11:01:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.279 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.279 [2024-10-06 11:01:59.702364] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:07:02.279 [2024-10-06 11:01:59.702410] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.279 [2024-10-06 11:01:59.760618] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.279 [2024-10-06 11:01:59.799433] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.279 [2024-10-06 11:01:59.799472] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.279 [2024-10-06 11:01:59.799478] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.279 [2024-10-06 11:01:59.799484] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:02.279 [2024-10-06 11:01:59.799489] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:02.279 [2024-10-06 11:01:59.800178] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.279 [2024-10-06 11:01:59.800180] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.538 [2024-10-06 11:01:59.930255] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:02.538 11:01:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.538 [2024-10-06 11:01:59.946457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.538 NULL1 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.538 Delay0 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1876566 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:02.538 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:02.538 [2024-10-06 11:02:00.021098] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
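With the target running inside cvl_0_0_ns_spdk, the test builds a deliberately slow namespace and then deletes its subsystem while I/O is queued against it. Condensing the RPC and perf invocations recorded above into one sequence (parameters copied from the trace; backgrounding perf and the final wait are assumptions standing in for the script's own pid handling):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$spdk/scripts/rpc.py"

  # Target side: TCP transport, subsystem cnode1, and a delay bdev on top of a null
  # bdev so that submitted I/O stays in flight long enough to race the delete.
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" bdev_null_create NULL1 1000 512
  "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # Initiator side: queue deep random I/O at the delayed namespace, then pull the
  # subsystem out from under it; the "completed with error (sct=0, sc=8)" lines that
  # follow in the log are those in-flight commands being completed with errors.
  "$spdk/build/bin/spdk_nvme_perf" -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  wait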
00:07:04.443 11:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:04.443 11:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.443 11:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error 
(sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 [2024-10-06 11:02:02.191874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fcb50 is same with the state(6) to be set 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 starting I/O failed: -6 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 [2024-10-06 11:02:02.192249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc288000c00 is same with the state(6) to be set 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Write completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.703 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 
Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, 
sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:04.704 Read completed with error (sct=0, sc=8) 00:07:04.704 Write completed with error (sct=0, sc=8) 00:07:05.642 [2024-10-06 11:02:03.158769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ffa80 is same with the state(6) to be set 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Write completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Write completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Write completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Write completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Write completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Write completed with error (sct=0, sc=8) 00:07:05.642 Write completed with error (sct=0, sc=8) 00:07:05.642 Write completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Write completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Write completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Write completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 [2024-10-06 11:02:03.194186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fce80 is same with the state(6) to be set 00:07:05.642 Read completed with error (sct=0, sc=8) 
00:07:05.642 Write completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.642 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 [2024-10-06 11:02:03.194342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc820 is same with the state(6) to be set 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed 
with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 [2024-10-06 11:02:03.194493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc320 is same with the state(6) to be set 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Read completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 Write completed with error (sct=0, sc=8) 00:07:05.643 [2024-10-06 11:02:03.194999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc28800d310 is same with the state(6) to be set 00:07:05.643 Initializing NVMe Controllers 00:07:05.643 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:05.643 Controller IO queue size 128, less than required. 00:07:05.643 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:05.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:05.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:05.643 Initialization complete. Launching workers. 
00:07:05.643 ======================================================== 00:07:05.643 Latency(us) 00:07:05.643 Device Information : IOPS MiB/s Average min max 00:07:05.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 190.66 0.09 947094.86 488.59 1011422.83 00:07:05.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.89 0.08 867086.93 232.03 1011260.20 00:07:05.643 ======================================================== 00:07:05.643 Total : 348.54 0.17 910851.95 232.03 1011422.83 00:07:05.643 00:07:05.643 [2024-10-06 11:02:03.195837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ffa80 (9): Bad file descriptor 00:07:05.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:05.643 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.643 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:05.643 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1876566 00:07:05.643 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1876566 00:07:06.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1876566) - No such process 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1876566 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1876566 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1876566 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.212 11:02:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.212 [2024-10-06 11:02:03.728122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1877238 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1877238 00:07:06.212 11:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:06.471 [2024-10-06 11:02:03.793789] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
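The wait loop traced above (the target/delete_subsystem.sh markers around lines 56-60) is a bounded poll: the script backgrounds spdk_nvme_perf, records its PID in perf_pid, then checks the process with `kill -0` every half second and gives up after roughly 20 polls. A minimal bash sketch of that pattern follows; the function name and the timeout handling here are illustrative, not the script's exact code:

    # Poll a backgrounded perf run until it exits; kill -0 sends no signal,
    # it only asks whether the PID still exists.
    wait_for_perf() {
        local perf_pid=$1 delay=0
        while kill -0 "$perf_pid" 2> /dev/null; do
            sleep 0.5
            if (( delay++ > 20 )); then
                echo "spdk_nvme_perf (pid $perf_pid) did not exit in time" >&2
                return 1
            fi
        done
    }

Once the perf process is gone, `kill -0` itself fails with "No such process"; the "(1876566) - No such process" message recorded earlier is that loop condition going false after the first perf run exited.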
00:07:06.731 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:06.731 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1877238 00:07:06.731 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:07.299 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:07.299 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1877238 00:07:07.299 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:07.867 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:07.867 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1877238 00:07:07.867 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:08.435 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:08.435 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1877238 00:07:08.435 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:08.695 11:02:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:08.695 11:02:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1877238 00:07:08.695 11:02:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:09.266 11:02:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:09.266 11:02:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1877238 00:07:09.266 11:02:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:09.526 Initializing NVMe Controllers 00:07:09.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:09.526 Controller IO queue size 128, less than required. 00:07:09.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:09.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:09.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:09.526 Initialization complete. Launching workers. 
00:07:09.526 ======================================================== 00:07:09.526 Latency(us) 00:07:09.526 Device Information : IOPS MiB/s Average min max 00:07:09.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003151.71 1000176.88 1041219.50 00:07:09.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005013.51 1000352.34 1011989.98 00:07:09.526 ======================================================== 00:07:09.526 Total : 256.00 0.12 1004082.61 1000176.88 1041219.50 00:07:09.526 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1877238 00:07:09.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1877238) - No such process 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1877238 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:09.786 rmmod nvme_tcp 00:07:09.786 rmmod nvme_fabrics 00:07:09.786 rmmod nvme_keyring 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1876545 ']' 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1876545 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1876545 ']' 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1876545 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.786 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1876545 00:07:10.046 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.046 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:07:10.046 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1876545' 00:07:10.046 killing process with pid 1876545 00:07:10.046 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1876545 00:07:10.046 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1876545 00:07:10.046 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:10.046 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:10.046 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:10.046 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:10.046 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:07:10.046 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:10.046 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:07:10.046 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:10.046 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:10.046 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.046 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.046 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:12.583 00:07:12.583 real 0m15.525s 00:07:12.583 user 0m29.052s 00:07:12.583 sys 0m5.028s 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.583 ************************************ 00:07:12.583 END TEST nvmf_delete_subsystem 00:07:12.583 ************************************ 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:12.583 ************************************ 00:07:12.583 START TEST nvmf_host_management 00:07:12.583 ************************************ 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:12.583 * Looking for test storage... 
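Before the host-management test gets going, a quick consistency check on the second latency summary printed above by spdk_nvme_perf: with both queue pairs completing 128.00 IOPS, the Total row is simply the IOPS-weighted mean of the two per-core averages, and the throughput column follows from the 512-byte I/O size (-o 512) used for that run:

    Average latency: (128.00 * 1003151.71 + 128.00 * 1005013.51) / 256.00 = 1004082.61 us
    Throughput     : 128 IOPS * 512 B = 65536 B/s ~ 0.06 MiB/s per core, 0.12 MiB/s total
    min / max      : Total min 1000176.88 us and max 1041219.50 us both come from core 2

The roughly one-second averages are consistent with the I/O being served by the Delay0 namespace attached to the subsystem earlier in the test.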
00:07:12.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.583 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:12.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.583 --rc genhtml_branch_coverage=1 00:07:12.583 --rc genhtml_function_coverage=1 00:07:12.583 --rc genhtml_legend=1 00:07:12.584 --rc geninfo_all_blocks=1 00:07:12.584 --rc geninfo_unexecuted_blocks=1 00:07:12.584 00:07:12.584 ' 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:12.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.584 --rc genhtml_branch_coverage=1 00:07:12.584 --rc genhtml_function_coverage=1 00:07:12.584 --rc genhtml_legend=1 00:07:12.584 --rc geninfo_all_blocks=1 00:07:12.584 --rc geninfo_unexecuted_blocks=1 00:07:12.584 00:07:12.584 ' 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:12.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.584 --rc genhtml_branch_coverage=1 00:07:12.584 --rc genhtml_function_coverage=1 00:07:12.584 --rc genhtml_legend=1 00:07:12.584 --rc geninfo_all_blocks=1 00:07:12.584 --rc geninfo_unexecuted_blocks=1 00:07:12.584 00:07:12.584 ' 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:12.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.584 --rc genhtml_branch_coverage=1 00:07:12.584 --rc genhtml_function_coverage=1 00:07:12.584 --rc genhtml_legend=1 00:07:12.584 --rc geninfo_all_blocks=1 00:07:12.584 --rc geninfo_unexecuted_blocks=1 00:07:12.584 00:07:12.584 ' 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:12.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:12.584 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:17.862 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:17.863 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:17.863 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:17.863 Found net devices under 0000:af:00.0: cvl_0_0 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.863 11:02:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:17.863 Found net devices under 0000:af:00.1: cvl_0_1 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:17.863 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:18.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:18.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:07:18.122 00:07:18.122 --- 10.0.0.2 ping statistics --- 00:07:18.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.122 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:18.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:18.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:07:18.122 00:07:18.122 --- 10.0.0.1 ping statistics --- 00:07:18.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.122 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1881335 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1881335 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:18.122 11:02:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1881335 ']' 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.122 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.123 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.123 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.123 [2024-10-06 11:02:15.590237] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:07:18.123 [2024-10-06 11:02:15.590288] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.123 [2024-10-06 11:02:15.650503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.123 [2024-10-06 11:02:15.692074] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.123 [2024-10-06 11:02:15.692116] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.123 [2024-10-06 11:02:15.692123] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.123 [2024-10-06 11:02:15.692130] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.123 [2024-10-06 11:02:15.692135] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
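The target here was started with `-m 0x1E`, so EAL reports four usable cores: 0x1E is binary 11110, i.e. bits 1-4 set and bit 0 clear, which maps to CPU cores 1 through 4 and matches the four reactor threads that come up next. A small illustrative bash helper (not part of the test scripts) for decoding such a mask:

    # Print which CPU cores a hex core mask enables.
    decode_coremask() {
        local mask=$(( $1 )) core
        for (( core = 0; core < 64; core++ )); do
            if (( (mask >> core) & 1 )); then
                echo "core $core enabled"
            fi
        done
    }
    decode_coremask 0x1E    # cores 1, 2, 3 and 4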
00:07:18.123 [2024-10-06 11:02:15.693692] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.123 [2024-10-06 11:02:15.693712] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.123 [2024-10-06 11:02:15.693821] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.123 [2024-10-06 11:02:15.693822] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.382 [2024-10-06 11:02:15.841430] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.382 Malloc0 00:07:18.382 [2024-10-06 11:02:15.900721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1881429 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1881429 /var/tmp/bdevperf.sock 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1881429 ']' 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:18.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:18.382 { 00:07:18.382 "params": { 00:07:18.382 "name": "Nvme$subsystem", 00:07:18.382 "trtype": "$TEST_TRANSPORT", 00:07:18.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:18.382 "adrfam": "ipv4", 00:07:18.382 "trsvcid": "$NVMF_PORT", 00:07:18.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:18.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:18.382 "hdgst": ${hdgst:-false}, 00:07:18.382 "ddgst": ${ddgst:-false} 00:07:18.382 }, 00:07:18.382 "method": "bdev_nvme_attach_controller" 00:07:18.382 } 00:07:18.382 EOF 00:07:18.382 )") 00:07:18.382 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:18.641 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:18.641 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:18.641 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:18.641 "params": { 00:07:18.641 "name": "Nvme0", 00:07:18.641 "trtype": "tcp", 00:07:18.641 "traddr": "10.0.0.2", 00:07:18.641 "adrfam": "ipv4", 00:07:18.641 "trsvcid": "4420", 00:07:18.641 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:18.641 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:18.641 "hdgst": false, 00:07:18.641 "ddgst": false 00:07:18.641 }, 00:07:18.641 "method": "bdev_nvme_attach_controller" 00:07:18.641 }' 00:07:18.641 [2024-10-06 11:02:15.996617] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
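The Malloc0 bdev and the TCP listener on 10.0.0.2:4420 above are created by replaying rpcs.txt through a single batched rpc_cmd, so the individual RPCs never appear in the trace. Based on the generated bdevperf JSON (subnqn nqn.2016-06.io.spdk:cnode0, hostnqn nqn.2016-06.io.spdk:host0), the batch corresponds approximately to the calls below; the malloc size, block size, and serial number are illustrative, not taken from the log:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

bdevperf, started next with -q 64 -o 65536 -w verify -t 10, then attaches to that subsystem over TCP using the JSON config printed just above.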
00:07:18.641 [2024-10-06 11:02:15.996662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1881429 ] 00:07:18.641 [2024-10-06 11:02:16.053395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.641 [2024-10-06 11:02:16.092393] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.900 Running I/O for 10 seconds... 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:18.900 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:19.159 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:19.159 
11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:19.159 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:19.160 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:19.160 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.160 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:19.420 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.421 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=641 00:07:19.421 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 641 -ge 100 ']' 00:07:19.421 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:19.421 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:19.421 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:19.421 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:19.421 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.421 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:19.421 [2024-10-06 11:02:16.775612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is 
same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775854] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.775943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630a0 is same with the state(6) to be set 00:07:19.421 [2024-10-06 11:02:16.776104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.421 [2024-10-06 11:02:16.776138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.421 [2024-10-06 11:02:16.776154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.421 [2024-10-06 11:02:16.776162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.421 [2024-10-06 11:02:16.776171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.421 [2024-10-06 11:02:16.776178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.421 [2024-10-06 11:02:16.776186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.421 [2024-10-06 11:02:16.776193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.421 [2024-10-06 11:02:16.776202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.421 [2024-10-06 11:02:16.776208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.421 [2024-10-06 11:02:16.776216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.421 [2024-10-06 11:02:16.776223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.421 [2024-10-06 11:02:16.776231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.421 [2024-10-06 11:02:16.776237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.421 [2024-10-06 11:02:16.776245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.421 [2024-10-06 11:02:16.776251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.421 [2024-10-06 11:02:16.776260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.421 [2024-10-06 11:02:16.776267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.421 [2024-10-06 11:02:16.776275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.421 [2024-10-06 11:02:16.776281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.421 [2024-10-06 11:02:16.776289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.421 [2024-10-06 11:02:16.776296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.421 [2024-10-06 11:02:16.776308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.421 [2024-10-06 11:02:16.776315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.421 [2024-10-06 11:02:16.776324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.421 [2024-10-06 11:02:16.776330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776338] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776783] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.422 [2024-10-06 11:02:16.776908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.422 [2024-10-06 11:02:16.776916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.423 [2024-10-06 11:02:16.776923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.423 [2024-10-06 11:02:16.776931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.423 [2024-10-06 11:02:16.776939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.423 [2024-10-06 11:02:16.776947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.423 [2024-10-06 11:02:16.776954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.423 [2024-10-06 11:02:16.776962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.423 [2024-10-06 11:02:16.776968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.423 [2024-10-06 11:02:16.776976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.423 [2024-10-06 11:02:16.776983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.423 [2024-10-06 11:02:16.776991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.423 [2024-10-06 11:02:16.776997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.423 [2024-10-06 11:02:16.777005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.423 [2024-10-06 11:02:16.777012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.423 [2024-10-06 11:02:16.777020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.423 [2024-10-06 11:02:16.777026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.423 [2024-10-06 11:02:16.777034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.423 [2024-10-06 11:02:16.777043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.423 [2024-10-06 11:02:16.777051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.423 [2024-10-06 11:02:16.777064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.423 [2024-10-06 11:02:16.777073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.423 [2024-10-06 11:02:16.777081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.423 [2024-10-06 11:02:16.777089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.423 [2024-10-06 11:02:16.777095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.423 [2024-10-06 11:02:16.777121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:07:19.423 [2024-10-06 11:02:16.777173] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2171480 was disconnected and freed. reset controller. 00:07:19.423 [2024-10-06 11:02:16.778036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:19.423 task offset: 90112 on job bdev=Nvme0n1 fails 00:07:19.423 00:07:19.423 Latency(us) 00:07:19.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.423 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:19.423 Job: Nvme0n1 ended in about 0.41 seconds with error 00:07:19.423 Verification LBA range: start 0x0 length 0x400 00:07:19.423 Nvme0n1 : 0.41 1722.29 107.64 156.57 0.00 33196.39 3011.54 29709.65 00:07:19.423 =================================================================================================================== 00:07:19.423 Total : 1722.29 107.64 156.57 0.00 33196.39 3011.54 29709.65 00:07:19.423 [2024-10-06 11:02:16.780391] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.423 [2024-10-06 11:02:16.780412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2174fc0 (9): Bad file descriptor 00:07:19.423 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.423 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:19.423 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.423 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:19.423 [2024-10-06 11:02:16.783433] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:19.423 [2024-10-06 11:02:16.783521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:19.423 [2024-10-06 11:02:16.783546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.423 [2024-10-06 11:02:16.783564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:19.423 [2024-10-06 11:02:16.783572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:19.423 [2024-10-06 11:02:16.783579] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:19.423 [2024-10-06 11:02:16.783586] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2174fc0 00:07:19.423 [2024-10-06 11:02:16.783608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x2174fc0 (9): Bad file descriptor 00:07:19.423 [2024-10-06 11:02:16.783621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:07:19.423 [2024-10-06 11:02:16.783629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:07:19.423 [2024-10-06 11:02:16.783637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:07:19.423 [2024-10-06 11:02:16.783649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:19.423 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.423 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:20.360 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1881429 00:07:20.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1881429) - No such process 00:07:20.360 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:20.360 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:20.360 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:20.361 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:20.361 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:20.361 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:20.361 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:20.361 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:20.361 { 00:07:20.361 "params": { 00:07:20.361 "name": "Nvme$subsystem", 00:07:20.361 "trtype": "$TEST_TRANSPORT", 00:07:20.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:20.361 "adrfam": "ipv4", 00:07:20.361 "trsvcid": "$NVMF_PORT", 00:07:20.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:20.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:20.361 "hdgst": ${hdgst:-false}, 00:07:20.361 "ddgst": ${ddgst:-false} 00:07:20.361 }, 00:07:20.361 "method": "bdev_nvme_attach_controller" 00:07:20.361 } 00:07:20.361 EOF 00:07:20.361 )") 00:07:20.361 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:20.361 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
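This is the core of the host-management check. While the first bdevperf run was driving I/O, the test removed host0 from cnode0's allowed-host list: the target aborted the 64 in-flight writes (the SQ DELETION completions above), the initiator's reconnect was rejected with "does not allow host", the controller reset failed, and the first bdevperf exited on its own (which is why the later kill -9 reports "No such process"). The host is then re-added, and a second one-second bdevperf run (-t 1, configured by the JSON being generated here) verifies that I/O succeeds again. Stripped of the harness's rpc_cmd wrapper, the two RPCs driving the scenario are:

    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0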
00:07:20.361 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:20.361 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:20.361 "params": { 00:07:20.361 "name": "Nvme0", 00:07:20.361 "trtype": "tcp", 00:07:20.361 "traddr": "10.0.0.2", 00:07:20.361 "adrfam": "ipv4", 00:07:20.361 "trsvcid": "4420", 00:07:20.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:20.361 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:20.361 "hdgst": false, 00:07:20.361 "ddgst": false 00:07:20.361 }, 00:07:20.361 "method": "bdev_nvme_attach_controller" 00:07:20.361 }' 00:07:20.361 [2024-10-06 11:02:17.848663] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:07:20.361 [2024-10-06 11:02:17.848711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1881679 ] 00:07:20.361 [2024-10-06 11:02:17.905034] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.620 [2024-10-06 11:02:17.942136] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.880 Running I/O for 1 seconds... 00:07:21.817 1732.00 IOPS, 108.25 MiB/s 00:07:21.817 Latency(us) 00:07:21.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.817 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:21.817 Verification LBA range: start 0x0 length 0x400 00:07:21.817 Nvme0n1 : 1.01 1782.19 111.39 0.00 0.00 35383.71 7708.28 30458.64 00:07:21.817 =================================================================================================================== 00:07:21.817 Total : 1782.19 111.39 0.00 0.00 35383.71 7708.28 30458.64 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:22.077 rmmod nvme_tcp 00:07:22.077 rmmod nvme_fabrics 00:07:22.077 rmmod nvme_keyring 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:22.077 11:02:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1881335 ']' 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1881335 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1881335 ']' 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1881335 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1881335 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1881335' 00:07:22.077 killing process with pid 1881335 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1881335 00:07:22.077 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1881335 00:07:22.337 [2024-10-06 11:02:19.714755] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:22.337 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:22.337 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:22.337 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:22.337 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:22.337 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:07:22.337 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:22.337 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:07:22.337 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:22.337 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:22.337 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.337 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:22.337 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.876 11:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:24.876 11:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT 
SIGTERM EXIT 00:07:24.876 00:07:24.876 real 0m12.088s 00:07:24.876 user 0m20.002s 00:07:24.876 sys 0m5.275s 00:07:24.876 11:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.876 11:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.876 ************************************ 00:07:24.876 END TEST nvmf_host_management 00:07:24.876 ************************************ 00:07:24.876 11:02:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:24.876 11:02:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:24.876 11:02:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.876 11:02:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:24.876 ************************************ 00:07:24.876 START TEST nvmf_lvol 00:07:24.876 ************************************ 00:07:24.876 11:02:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:24.876 * Looking for test storage... 00:07:24.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.876 11:02:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:24.876 11:02:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:07:24.876 11:02:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:24.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.876 --rc genhtml_branch_coverage=1 00:07:24.876 --rc genhtml_function_coverage=1 00:07:24.876 --rc genhtml_legend=1 00:07:24.876 --rc geninfo_all_blocks=1 00:07:24.876 --rc geninfo_unexecuted_blocks=1 00:07:24.876 00:07:24.876 ' 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:24.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.876 --rc genhtml_branch_coverage=1 00:07:24.876 --rc genhtml_function_coverage=1 00:07:24.876 --rc genhtml_legend=1 00:07:24.876 --rc geninfo_all_blocks=1 00:07:24.876 --rc geninfo_unexecuted_blocks=1 00:07:24.876 00:07:24.876 ' 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:24.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.876 --rc genhtml_branch_coverage=1 00:07:24.876 --rc genhtml_function_coverage=1 00:07:24.876 --rc genhtml_legend=1 00:07:24.876 --rc geninfo_all_blocks=1 00:07:24.876 --rc geninfo_unexecuted_blocks=1 00:07:24.876 00:07:24.876 ' 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:24.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.876 --rc genhtml_branch_coverage=1 00:07:24.876 --rc genhtml_function_coverage=1 00:07:24.876 --rc genhtml_legend=1 00:07:24.876 --rc geninfo_all_blocks=1 00:07:24.876 --rc geninfo_unexecuted_blocks=1 00:07:24.876 00:07:24.876 ' 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
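The lcov version gate traced just above (split each version string on IFS=.-, compare field by field numerically, and fall back to the legacy --rc lcov_* options when lcov is older than 2) can be condensed into the following sketch; version_lt is a hypothetical helper name for illustration, not the actual function in scripts/common.sh:

    # Rough re-implementation of the dotted-version check walked through above.
    # Returns 0 (true) when $1 sorts strictly before $2, comparing field by field.
    version_lt() {
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }

    # e.g. lcov 1.15 is older than 2, so the legacy branch/function coverage flags get used
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi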
00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.876 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:24.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:24.877 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:30.156 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.156 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:30.157 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.157 11:02:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:30.157 Found net devices under 0000:af:00.0: cvl_0_0 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:30.157 Found net devices under 0000:af:00.1: cvl_0_1 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:30.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:30.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:07:30.157 00:07:30.157 --- 10.0.0.2 ping statistics --- 00:07:30.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.157 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:30.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:30.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:07:30.157 00:07:30.157 --- 10.0.0.1 ping statistics --- 00:07:30.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.157 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1885442 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1885442 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1885442 ']' 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.157 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.157 [2024-10-06 11:02:27.606462] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
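The interface plumbing traced above reduces to a short sequence; the device names (cvl_0_0, cvl_0_1), the namespace name and the 10.0.0.x addresses are the ones this particular run discovered for the two e810 ports, so treat this as a sketch of the pattern rather than a fixed recipe:

    # Move the target-side port into its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target interface
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port and verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # nvmf_tgt is then started inside the namespace, as in the trace:
    # ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7

Splitting the two physical ports across namespaces is what lets a single host act as both NVMe/TCP target and initiator over real NICs in these tests.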
00:07:30.157 [2024-10-06 11:02:27.606506] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.157 [2024-10-06 11:02:27.667801] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.157 [2024-10-06 11:02:27.707064] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.157 [2024-10-06 11:02:27.707105] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.157 [2024-10-06 11:02:27.707116] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.157 [2024-10-06 11:02:27.707122] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.157 [2024-10-06 11:02:27.707127] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:30.157 [2024-10-06 11:02:27.708033] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.157 [2024-10-06 11:02:27.708055] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.157 [2024-10-06 11:02:27.708067] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.417 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.417 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:30.417 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:30.417 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:30.417 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.417 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.417 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:30.676 [2024-10-06 11:02:28.010953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.676 11:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:30.676 11:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:30.676 11:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:30.936 11:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:30.936 11:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:31.195 11:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:31.454 11:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2b68805e-1adf-4994-953e-f13ae74681bb 00:07:31.454 11:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2b68805e-1adf-4994-953e-f13ae74681bb lvol 20 00:07:31.713 11:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f7fb360d-78c4-4fb0-8b91-32a796e1f3bd 00:07:31.713 11:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:31.713 11:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f7fb360d-78c4-4fb0-8b91-32a796e1f3bd 00:07:31.972 11:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:32.231 [2024-10-06 11:02:29.616882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.231 11:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:32.491 11:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1885861 00:07:32.491 11:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:32.491 11:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:33.427 11:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f7fb360d-78c4-4fb0-8b91-32a796e1f3bd MY_SNAPSHOT 00:07:33.686 11:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=557db2eb-7728-4c13-9b0c-903e9d5b2329 00:07:33.686 11:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f7fb360d-78c4-4fb0-8b91-32a796e1f3bd 30 00:07:33.944 11:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 557db2eb-7728-4c13-9b0c-903e9d5b2329 MY_CLONE 00:07:34.202 11:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bed11eca-ba99-40a3-9ef9-18670b5b05b6 00:07:34.202 11:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bed11eca-ba99-40a3-9ef9-18670b5b05b6 00:07:34.771 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1885861 00:07:42.893 Initializing NVMe Controllers 00:07:42.893 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:42.893 Controller IO queue size 128, less than required. 00:07:42.893 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
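Stripped of the xtrace noise, the lvol flow the script drives through rpc.py before and during the perf job above is roughly the following; $rpc and the captured shell variables are shorthand for this sketch, while the real run keeps the UUIDs printed in the log (2b68805e-… for the lvstore, f7fb360d-… for the lvol), and spdk_nvme_perf is already writing to the namespace while the snapshot/resize/clone steps execute:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc bdev_malloc_create 64 512                                  # Malloc0
    $rpc bdev_malloc_create 64 512                                  # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe the two malloc bdevs

    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                  # lvstore on top of the raid
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                 # 20 MiB logical volume

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)             # snapshot, then grow the origin
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)                  # clone the snapshot and inflate it
    $rpc bdev_lvol_inflate "$clone"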
00:07:42.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:42.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:42.893 Initialization complete. Launching workers. 00:07:42.893 ======================================================== 00:07:42.893 Latency(us) 00:07:42.893 Device Information : IOPS MiB/s Average min max 00:07:42.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12169.10 47.54 10517.46 2002.24 68054.54 00:07:42.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12081.90 47.19 10594.85 3588.06 46933.12 00:07:42.893 ======================================================== 00:07:42.893 Total : 24251.00 94.73 10556.02 2002.24 68054.54 00:07:42.893 00:07:42.893 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:42.893 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f7fb360d-78c4-4fb0-8b91-32a796e1f3bd 00:07:43.152 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2b68805e-1adf-4994-953e-f13ae74681bb 00:07:43.411 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:43.411 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:43.412 rmmod nvme_tcp 00:07:43.412 rmmod nvme_fabrics 00:07:43.412 rmmod nvme_keyring 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1885442 ']' 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1885442 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1885442 ']' 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1885442 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1885442 00:07:43.412 11:02:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1885442' 00:07:43.412 killing process with pid 1885442 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1885442 00:07:43.412 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1885442 00:07:43.671 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:43.671 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:43.671 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:43.671 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:43.671 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:07:43.671 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:07:43.671 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:43.671 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:43.671 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:43.671 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.671 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.671 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:46.209 00:07:46.209 real 0m21.317s 00:07:46.209 user 1m2.649s 00:07:46.209 sys 0m7.266s 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:46.209 ************************************ 00:07:46.209 END TEST nvmf_lvol 00:07:46.209 ************************************ 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:46.209 ************************************ 00:07:46.209 START TEST nvmf_lvs_grow 00:07:46.209 ************************************ 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:46.209 * Looking for test storage... 
00:07:46.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.209 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:46.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.210 --rc genhtml_branch_coverage=1 00:07:46.210 --rc genhtml_function_coverage=1 00:07:46.210 --rc genhtml_legend=1 00:07:46.210 --rc geninfo_all_blocks=1 00:07:46.210 --rc geninfo_unexecuted_blocks=1 00:07:46.210 00:07:46.210 ' 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:46.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.210 --rc genhtml_branch_coverage=1 00:07:46.210 --rc genhtml_function_coverage=1 00:07:46.210 --rc genhtml_legend=1 00:07:46.210 --rc geninfo_all_blocks=1 00:07:46.210 --rc geninfo_unexecuted_blocks=1 00:07:46.210 00:07:46.210 ' 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:46.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.210 --rc genhtml_branch_coverage=1 00:07:46.210 --rc genhtml_function_coverage=1 00:07:46.210 --rc genhtml_legend=1 00:07:46.210 --rc geninfo_all_blocks=1 00:07:46.210 --rc geninfo_unexecuted_blocks=1 00:07:46.210 00:07:46.210 ' 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:46.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.210 --rc genhtml_branch_coverage=1 00:07:46.210 --rc genhtml_function_coverage=1 00:07:46.210 --rc genhtml_legend=1 00:07:46.210 --rc geninfo_all_blocks=1 00:07:46.210 --rc geninfo_unexecuted_blocks=1 00:07:46.210 00:07:46.210 ' 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:46.210 11:02:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:46.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:46.210 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:51.592 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:51.592 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:51.592 11:02:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:51.592 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:51.593 Found net devices under 0000:af:00.0: cvl_0_0 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:51.593 Found net devices under 0000:af:00.1: cvl_0_1 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:51.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:51.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:07:51.593 00:07:51.593 --- 10.0.0.2 ping statistics --- 00:07:51.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.593 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:51.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:07:51.593 00:07:51.593 --- 10.0.0.1 ping statistics --- 00:07:51.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.593 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1891137 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1891137 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1891137 ']' 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.593 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:51.593 [2024-10-06 11:02:48.916826] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
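For readers following the trace, the nvmftestinit/nvmfappstart steps above reduce to roughly the following shell sequence. This is a condensed sketch, not the literal helper bodies (those live in nvmf/common.sh and autotest_common.sh), and rpc.py abbreviates scripts/rpc.py under the SPDK checkout:

  # put the target-side port (cvl_0_0) in its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the suite also tags the rule with an SPDK_NVMF comment
  modprobe nvme-tcp
  # launch the target inside the namespace; waitforlisten blocks until it listens on /var/tmp/spdk.sock
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

Once the RPC socket answers, the next call in the trace creates the TCP transport with rpc.py nvmf_create_transport -t tcp -o -u 8192.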
00:07:51.593 [2024-10-06 11:02:48.916866] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.593 [2024-10-06 11:02:48.973103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.593 [2024-10-06 11:02:49.012188] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.593 [2024-10-06 11:02:49.012228] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.593 [2024-10-06 11:02:49.012235] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.593 [2024-10-06 11:02:49.012241] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.593 [2024-10-06 11:02:49.012246] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.593 [2024-10-06 11:02:49.012730] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.594 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.594 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:51.594 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:51.594 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:51.594 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:51.594 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.594 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:51.853 [2024-10-06 11:02:49.310905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.853 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:51.853 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:51.853 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.853 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:51.853 ************************************ 00:07:51.853 START TEST lvs_grow_clean 00:07:51.853 ************************************ 00:07:51.853 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:51.853 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:51.853 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:51.853 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:51.853 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:51.853 11:02:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:51.853 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:51.853 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:51.853 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:51.854 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:52.112 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:52.112 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:52.371 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3444167c-e5c3-4efe-a774-56dad3eeb123 00:07:52.371 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3444167c-e5c3-4efe-a774-56dad3eeb123 00:07:52.371 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:52.629 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:52.629 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:52.629 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3444167c-e5c3-4efe-a774-56dad3eeb123 lvol 150 00:07:52.629 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=85458ee7-8e3e-44c3-8091-136c82b66451 00:07:52.629 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:52.629 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:52.887 [2024-10-06 11:02:50.329919] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:52.887 [2024-10-06 11:02:50.329970] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:52.887 true 00:07:52.887 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
3444167c-e5c3-4efe-a774-56dad3eeb123 00:07:52.887 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:53.145 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:53.145 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:53.403 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 85458ee7-8e3e-44c3-8091-136c82b66451 00:07:53.403 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:53.661 [2024-10-06 11:02:51.084168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.661 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.919 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:53.919 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1891620 00:07:53.919 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:53.919 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1891620 /var/tmp/bdevperf.sock 00:07:53.919 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1891620 ']' 00:07:53.919 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:53.919 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.919 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:53.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:53.919 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.919 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:53.919 [2024-10-06 11:02:51.299311] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
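Condensing the lvs_grow_clean setup just traced (long workspace paths shortened, rpc.py meaning scripts/rpc.py; the lvs and lvol variables hold the UUIDs reported in this run):

  truncate -s 200M test/nvmf/target/aio_bdev
  rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)   # 150M lvol; the lvstore starts with 49 data clusters
  truncate -s 400M test/nvmf/target/aio_bdev           # pre-grow the backing file
  rpc.py bdev_aio_rescan aio_bdev                      # aio bdev resized: 51200 -> 102400 blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdevperf, just started above with -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z, then attaches that namespace from the initiator side via rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0, and bdevperf.py perform_tests drives the 10-second randwrite run whose per-second IOPS appear below.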
00:07:53.919 [2024-10-06 11:02:51.299354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1891620 ] 00:07:53.919 [2024-10-06 11:02:51.355011] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.919 [2024-10-06 11:02:51.395793] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.919 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.919 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:53.919 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:54.486 Nvme0n1 00:07:54.486 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:54.745 [ 00:07:54.745 { 00:07:54.745 "name": "Nvme0n1", 00:07:54.745 "aliases": [ 00:07:54.745 "85458ee7-8e3e-44c3-8091-136c82b66451" 00:07:54.745 ], 00:07:54.745 "product_name": "NVMe disk", 00:07:54.745 "block_size": 4096, 00:07:54.745 "num_blocks": 38912, 00:07:54.745 "uuid": "85458ee7-8e3e-44c3-8091-136c82b66451", 00:07:54.745 "numa_id": 1, 00:07:54.745 "assigned_rate_limits": { 00:07:54.745 "rw_ios_per_sec": 0, 00:07:54.745 "rw_mbytes_per_sec": 0, 00:07:54.745 "r_mbytes_per_sec": 0, 00:07:54.745 "w_mbytes_per_sec": 0 00:07:54.745 }, 00:07:54.745 "claimed": false, 00:07:54.745 "zoned": false, 00:07:54.745 "supported_io_types": { 00:07:54.745 "read": true, 00:07:54.745 "write": true, 00:07:54.745 "unmap": true, 00:07:54.745 "flush": true, 00:07:54.745 "reset": true, 00:07:54.745 "nvme_admin": true, 00:07:54.745 "nvme_io": true, 00:07:54.745 "nvme_io_md": false, 00:07:54.745 "write_zeroes": true, 00:07:54.745 "zcopy": false, 00:07:54.745 "get_zone_info": false, 00:07:54.745 "zone_management": false, 00:07:54.745 "zone_append": false, 00:07:54.745 "compare": true, 00:07:54.745 "compare_and_write": true, 00:07:54.745 "abort": true, 00:07:54.745 "seek_hole": false, 00:07:54.745 "seek_data": false, 00:07:54.745 "copy": true, 00:07:54.745 "nvme_iov_md": false 00:07:54.745 }, 00:07:54.745 "memory_domains": [ 00:07:54.745 { 00:07:54.745 "dma_device_id": "system", 00:07:54.745 "dma_device_type": 1 00:07:54.745 } 00:07:54.745 ], 00:07:54.745 "driver_specific": { 00:07:54.745 "nvme": [ 00:07:54.745 { 00:07:54.745 "trid": { 00:07:54.745 "trtype": "TCP", 00:07:54.745 "adrfam": "IPv4", 00:07:54.745 "traddr": "10.0.0.2", 00:07:54.745 "trsvcid": "4420", 00:07:54.745 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:54.745 }, 00:07:54.745 "ctrlr_data": { 00:07:54.745 "cntlid": 1, 00:07:54.745 "vendor_id": "0x8086", 00:07:54.745 "model_number": "SPDK bdev Controller", 00:07:54.745 "serial_number": "SPDK0", 00:07:54.745 "firmware_revision": "25.01", 00:07:54.745 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:54.745 "oacs": { 00:07:54.745 "security": 0, 00:07:54.745 "format": 0, 00:07:54.745 "firmware": 0, 00:07:54.745 "ns_manage": 0 00:07:54.745 }, 00:07:54.745 "multi_ctrlr": true, 00:07:54.745 
"ana_reporting": false 00:07:54.745 }, 00:07:54.745 "vs": { 00:07:54.745 "nvme_version": "1.3" 00:07:54.745 }, 00:07:54.745 "ns_data": { 00:07:54.745 "id": 1, 00:07:54.745 "can_share": true 00:07:54.745 } 00:07:54.745 } 00:07:54.745 ], 00:07:54.745 "mp_policy": "active_passive" 00:07:54.745 } 00:07:54.745 } 00:07:54.745 ] 00:07:54.745 11:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:54.745 11:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1891812 00:07:54.745 11:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:54.745 Running I/O for 10 seconds... 00:07:55.683 Latency(us) 00:07:55.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.683 Nvme0n1 : 1.00 23122.00 90.32 0.00 0.00 0.00 0.00 0.00 00:07:55.683 =================================================================================================================== 00:07:55.683 Total : 23122.00 90.32 0.00 0.00 0.00 0.00 0.00 00:07:55.683 00:07:56.620 11:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3444167c-e5c3-4efe-a774-56dad3eeb123 00:07:56.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.620 Nvme0n1 : 2.00 23254.50 90.84 0.00 0.00 0.00 0.00 0.00 00:07:56.620 =================================================================================================================== 00:07:56.620 Total : 23254.50 90.84 0.00 0.00 0.00 0.00 0.00 00:07:56.620 00:07:56.880 true 00:07:56.880 11:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3444167c-e5c3-4efe-a774-56dad3eeb123 00:07:56.880 11:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:57.139 11:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:57.139 11:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:57.139 11:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1891812 00:07:57.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.708 Nvme0n1 : 3.00 23295.00 91.00 0.00 0.00 0.00 0.00 0.00 00:07:57.708 =================================================================================================================== 00:07:57.708 Total : 23295.00 91.00 0.00 0.00 0.00 0.00 0.00 00:07:57.708 00:07:58.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.645 Nvme0n1 : 4.00 23341.00 91.18 0.00 0.00 0.00 0.00 0.00 00:07:58.645 =================================================================================================================== 00:07:58.645 Total : 23341.00 91.18 0.00 0.00 0.00 0.00 0.00 00:07:58.645 00:08:00.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.022 
Nvme0n1 : 5.00 23401.00 91.41 0.00 0.00 0.00 0.00 0.00 00:08:00.022 =================================================================================================================== 00:08:00.022 Total : 23401.00 91.41 0.00 0.00 0.00 0.00 0.00 00:08:00.022 00:08:00.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.960 Nvme0n1 : 6.00 23441.17 91.57 0.00 0.00 0.00 0.00 0.00 00:08:00.960 =================================================================================================================== 00:08:00.960 Total : 23441.17 91.57 0.00 0.00 0.00 0.00 0.00 00:08:00.960 00:08:01.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.898 Nvme0n1 : 7.00 23467.86 91.67 0.00 0.00 0.00 0.00 0.00 00:08:01.898 =================================================================================================================== 00:08:01.898 Total : 23467.86 91.67 0.00 0.00 0.00 0.00 0.00 00:08:01.898 00:08:02.835 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.835 Nvme0n1 : 8.00 23452.50 91.61 0.00 0.00 0.00 0.00 0.00 00:08:02.835 =================================================================================================================== 00:08:02.835 Total : 23452.50 91.61 0.00 0.00 0.00 0.00 0.00 00:08:02.835 00:08:03.774 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.774 Nvme0n1 : 9.00 23432.00 91.53 0.00 0.00 0.00 0.00 0.00 00:08:03.774 =================================================================================================================== 00:08:03.774 Total : 23432.00 91.53 0.00 0.00 0.00 0.00 0.00 00:08:03.774 00:08:04.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.713 Nvme0n1 : 10.00 23430.10 91.52 0.00 0.00 0.00 0.00 0.00 00:08:04.713 =================================================================================================================== 00:08:04.713 Total : 23430.10 91.52 0.00 0.00 0.00 0.00 0.00 00:08:04.713 00:08:04.713 00:08:04.713 Latency(us) 00:08:04.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.713 Nvme0n1 : 10.01 23429.16 91.52 0.00 0.00 5460.41 3183.18 12358.22 00:08:04.713 =================================================================================================================== 00:08:04.713 Total : 23429.16 91.52 0.00 0.00 5460.41 3183.18 12358.22 00:08:04.713 { 00:08:04.713 "results": [ 00:08:04.713 { 00:08:04.713 "job": "Nvme0n1", 00:08:04.713 "core_mask": "0x2", 00:08:04.713 "workload": "randwrite", 00:08:04.713 "status": "finished", 00:08:04.713 "queue_depth": 128, 00:08:04.713 "io_size": 4096, 00:08:04.713 "runtime": 10.005864, 00:08:04.713 "iops": 23429.16113990756, 00:08:04.713 "mibps": 91.5201607027639, 00:08:04.713 "io_failed": 0, 00:08:04.713 "io_timeout": 0, 00:08:04.713 "avg_latency_us": 5460.407271276571, 00:08:04.713 "min_latency_us": 3183.177142857143, 00:08:04.713 "max_latency_us": 12358.217142857144 00:08:04.713 } 00:08:04.713 ], 00:08:04.713 "core_count": 1 00:08:04.713 } 00:08:04.713 11:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1891620 00:08:04.713 11:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1891620 ']' 00:08:04.713 11:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@954 -- # kill -0 1891620 00:08:04.713 11:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:04.713 11:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.713 11:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1891620 00:08:04.713 11:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:04.713 11:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:04.713 11:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1891620' 00:08:04.713 killing process with pid 1891620 00:08:04.713 11:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1891620 00:08:04.713 Received shutdown signal, test time was about 10.000000 seconds 00:08:04.713 00:08:04.713 Latency(us) 00:08:04.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.713 =================================================================================================================== 00:08:04.713 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:04.713 11:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1891620 00:08:04.972 11:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.232 11:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:05.492 11:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3444167c-e5c3-4efe-a774-56dad3eeb123 00:08:05.492 11:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:05.492 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:05.492 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:05.492 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:05.753 [2024-10-06 11:03:03.227397] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:05.753 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3444167c-e5c3-4efe-a774-56dad3eeb123 00:08:05.753 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:05.753 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3444167c-e5c3-4efe-a774-56dad3eeb123 00:08:05.753 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:05.753 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.753 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:05.753 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.753 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:05.753 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.753 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:05.753 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:05.753 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3444167c-e5c3-4efe-a774-56dad3eeb123 00:08:06.012 request: 00:08:06.012 { 00:08:06.012 "uuid": "3444167c-e5c3-4efe-a774-56dad3eeb123", 00:08:06.012 "method": "bdev_lvol_get_lvstores", 00:08:06.012 "req_id": 1 00:08:06.012 } 00:08:06.012 Got JSON-RPC error response 00:08:06.012 response: 00:08:06.013 { 00:08:06.013 "code": -19, 00:08:06.013 "message": "No such device" 00:08:06.013 } 00:08:06.013 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:06.013 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:06.013 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:06.013 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:06.013 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:06.272 aio_bdev 00:08:06.272 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 85458ee7-8e3e-44c3-8091-136c82b66451 00:08:06.272 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=85458ee7-8e3e-44c3-8091-136c82b66451 00:08:06.272 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:06.272 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:06.272 11:03:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:06.272 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:06.272 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:06.272 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 85458ee7-8e3e-44c3-8091-136c82b66451 -t 2000 00:08:06.531 [ 00:08:06.531 { 00:08:06.531 "name": "85458ee7-8e3e-44c3-8091-136c82b66451", 00:08:06.531 "aliases": [ 00:08:06.531 "lvs/lvol" 00:08:06.531 ], 00:08:06.531 "product_name": "Logical Volume", 00:08:06.531 "block_size": 4096, 00:08:06.531 "num_blocks": 38912, 00:08:06.531 "uuid": "85458ee7-8e3e-44c3-8091-136c82b66451", 00:08:06.531 "assigned_rate_limits": { 00:08:06.531 "rw_ios_per_sec": 0, 00:08:06.531 "rw_mbytes_per_sec": 0, 00:08:06.531 "r_mbytes_per_sec": 0, 00:08:06.531 "w_mbytes_per_sec": 0 00:08:06.531 }, 00:08:06.531 "claimed": false, 00:08:06.531 "zoned": false, 00:08:06.531 "supported_io_types": { 00:08:06.531 "read": true, 00:08:06.531 "write": true, 00:08:06.531 "unmap": true, 00:08:06.531 "flush": false, 00:08:06.531 "reset": true, 00:08:06.531 "nvme_admin": false, 00:08:06.531 "nvme_io": false, 00:08:06.531 "nvme_io_md": false, 00:08:06.531 "write_zeroes": true, 00:08:06.531 "zcopy": false, 00:08:06.531 "get_zone_info": false, 00:08:06.531 "zone_management": false, 00:08:06.531 "zone_append": false, 00:08:06.531 "compare": false, 00:08:06.531 "compare_and_write": false, 00:08:06.531 "abort": false, 00:08:06.531 "seek_hole": true, 00:08:06.531 "seek_data": true, 00:08:06.531 "copy": false, 00:08:06.531 "nvme_iov_md": false 00:08:06.531 }, 00:08:06.531 "driver_specific": { 00:08:06.531 "lvol": { 00:08:06.531 "lvol_store_uuid": "3444167c-e5c3-4efe-a774-56dad3eeb123", 00:08:06.531 "base_bdev": "aio_bdev", 00:08:06.532 "thin_provision": false, 00:08:06.532 "num_allocated_clusters": 38, 00:08:06.532 "snapshot": false, 00:08:06.532 "clone": false, 00:08:06.532 "esnap_clone": false 00:08:06.532 } 00:08:06.532 } 00:08:06.532 } 00:08:06.532 ] 00:08:06.532 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:06.532 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:06.532 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3444167c-e5c3-4efe-a774-56dad3eeb123 00:08:06.790 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:06.791 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3444167c-e5c3-4efe-a774-56dad3eeb123 00:08:06.791 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:07.050 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:07.050 11:03:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 85458ee7-8e3e-44c3-8091-136c82b66451 00:08:07.050 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3444167c-e5c3-4efe-a774-56dad3eeb123 00:08:07.310 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:07.569 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.569 00:08:07.569 real 0m15.652s 00:08:07.569 user 0m15.238s 00:08:07.569 sys 0m1.440s 00:08:07.569 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.570 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:07.570 ************************************ 00:08:07.570 END TEST lvs_grow_clean 00:08:07.570 ************************************ 00:08:07.570 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:07.570 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:07.570 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.570 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:07.570 ************************************ 00:08:07.570 START TEST lvs_grow_dirty 00:08:07.570 ************************************ 00:08:07.570 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:07.570 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:07.570 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:07.570 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:07.570 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:07.570 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:07.570 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:07.570 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.570 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.570 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:07.829 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:07.829 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:08.088 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=eda18c91-43cb-4e42-9099-ee370cbc2e97 00:08:08.088 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda18c91-43cb-4e42-9099-ee370cbc2e97 00:08:08.088 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:08.347 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:08.347 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:08.347 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eda18c91-43cb-4e42-9099-ee370cbc2e97 lvol 150 00:08:08.347 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e9423fd2-53ca-4d12-9cff-b9b8b3280d96 00:08:08.347 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:08.347 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:08.607 [2024-10-06 11:03:06.049464] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:08.607 [2024-10-06 11:03:06.049513] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:08.607 true 00:08:08.607 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda18c91-43cb-4e42-9099-ee370cbc2e97 00:08:08.607 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:08.866 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:08.866 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:09.125 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e9423fd2-53ca-4d12-9cff-b9b8b3280d96 00:08:09.125 11:03:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:09.384 [2024-10-06 11:03:06.779639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.384 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.643 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1894398 00:08:09.643 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:09.643 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:09.643 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1894398 /var/tmp/bdevperf.sock 00:08:09.643 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1894398 ']' 00:08:09.643 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:09.643 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.643 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:09.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:09.643 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.643 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.643 [2024-10-06 11:03:07.018536] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
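The dirty variant exercises the same grow path while bdevperf I/O is in flight. In sketch form, the step that the data_clusters checks below validate is (lvs holding the lvstore UUID reported above; the backing file was already truncated to 400M and rescanned earlier in the trace):

  rpc.py bdev_lvol_grow_lvstore -u "$lvs"                                       # issued while the 10s randwrite run is active
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'    # expect 49 -> 99

The clean test's teardown above followed the same readback with bdev_lvol_delete, bdev_lvol_delete_lvstore, bdev_aio_delete, and rm -f on the backing file.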
00:08:09.643 [2024-10-06 11:03:07.018580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1894398 ] 00:08:09.643 [2024-10-06 11:03:07.073603] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.643 [2024-10-06 11:03:07.112230] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.644 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.644 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:09.644 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:10.211 Nvme0n1 00:08:10.211 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:10.211 [ 00:08:10.211 { 00:08:10.211 "name": "Nvme0n1", 00:08:10.211 "aliases": [ 00:08:10.211 "e9423fd2-53ca-4d12-9cff-b9b8b3280d96" 00:08:10.211 ], 00:08:10.211 "product_name": "NVMe disk", 00:08:10.211 "block_size": 4096, 00:08:10.211 "num_blocks": 38912, 00:08:10.211 "uuid": "e9423fd2-53ca-4d12-9cff-b9b8b3280d96", 00:08:10.211 "numa_id": 1, 00:08:10.211 "assigned_rate_limits": { 00:08:10.211 "rw_ios_per_sec": 0, 00:08:10.211 "rw_mbytes_per_sec": 0, 00:08:10.211 "r_mbytes_per_sec": 0, 00:08:10.211 "w_mbytes_per_sec": 0 00:08:10.211 }, 00:08:10.211 "claimed": false, 00:08:10.211 "zoned": false, 00:08:10.211 "supported_io_types": { 00:08:10.211 "read": true, 00:08:10.211 "write": true, 00:08:10.211 "unmap": true, 00:08:10.211 "flush": true, 00:08:10.211 "reset": true, 00:08:10.211 "nvme_admin": true, 00:08:10.211 "nvme_io": true, 00:08:10.211 "nvme_io_md": false, 00:08:10.211 "write_zeroes": true, 00:08:10.211 "zcopy": false, 00:08:10.211 "get_zone_info": false, 00:08:10.211 "zone_management": false, 00:08:10.212 "zone_append": false, 00:08:10.212 "compare": true, 00:08:10.212 "compare_and_write": true, 00:08:10.212 "abort": true, 00:08:10.212 "seek_hole": false, 00:08:10.212 "seek_data": false, 00:08:10.212 "copy": true, 00:08:10.212 "nvme_iov_md": false 00:08:10.212 }, 00:08:10.212 "memory_domains": [ 00:08:10.212 { 00:08:10.212 "dma_device_id": "system", 00:08:10.212 "dma_device_type": 1 00:08:10.212 } 00:08:10.212 ], 00:08:10.212 "driver_specific": { 00:08:10.212 "nvme": [ 00:08:10.212 { 00:08:10.212 "trid": { 00:08:10.212 "trtype": "TCP", 00:08:10.212 "adrfam": "IPv4", 00:08:10.212 "traddr": "10.0.0.2", 00:08:10.212 "trsvcid": "4420", 00:08:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:10.212 }, 00:08:10.212 "ctrlr_data": { 00:08:10.212 "cntlid": 1, 00:08:10.212 "vendor_id": "0x8086", 00:08:10.212 "model_number": "SPDK bdev Controller", 00:08:10.212 "serial_number": "SPDK0", 00:08:10.212 "firmware_revision": "25.01", 00:08:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:10.212 "oacs": { 00:08:10.212 "security": 0, 00:08:10.212 "format": 0, 00:08:10.212 "firmware": 0, 00:08:10.212 "ns_manage": 0 00:08:10.212 }, 00:08:10.212 "multi_ctrlr": true, 00:08:10.212 
"ana_reporting": false 00:08:10.212 }, 00:08:10.212 "vs": { 00:08:10.212 "nvme_version": "1.3" 00:08:10.212 }, 00:08:10.212 "ns_data": { 00:08:10.212 "id": 1, 00:08:10.212 "can_share": true 00:08:10.212 } 00:08:10.212 } 00:08:10.212 ], 00:08:10.212 "mp_policy": "active_passive" 00:08:10.212 } 00:08:10.212 } 00:08:10.212 ] 00:08:10.212 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1894508 00:08:10.212 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:10.212 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:10.472 Running I/O for 10 seconds... 00:08:11.409 Latency(us) 00:08:11.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.409 Nvme0n1 : 1.00 21990.00 85.90 0.00 0.00 0.00 0.00 0.00 00:08:11.409 =================================================================================================================== 00:08:11.409 Total : 21990.00 85.90 0.00 0.00 0.00 0.00 0.00 00:08:11.409 00:08:12.346 11:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u eda18c91-43cb-4e42-9099-ee370cbc2e97 00:08:12.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.346 Nvme0n1 : 2.00 22179.00 86.64 0.00 0.00 0.00 0.00 0.00 00:08:12.346 =================================================================================================================== 00:08:12.346 Total : 22179.00 86.64 0.00 0.00 0.00 0.00 0.00 00:08:12.346 00:08:12.604 true 00:08:12.604 11:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda18c91-43cb-4e42-9099-ee370cbc2e97 00:08:12.605 11:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:12.605 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:12.605 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:12.605 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1894508 00:08:13.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.543 Nvme0n1 : 3.00 22223.33 86.81 0.00 0.00 0.00 0.00 0.00 00:08:13.543 =================================================================================================================== 00:08:13.543 Total : 22223.33 86.81 0.00 0.00 0.00 0.00 0.00 00:08:13.543 00:08:14.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.482 Nvme0n1 : 4.00 22335.50 87.25 0.00 0.00 0.00 0.00 0.00 00:08:14.482 =================================================================================================================== 00:08:14.482 Total : 22335.50 87.25 0.00 0.00 0.00 0.00 0.00 00:08:14.482 00:08:15.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.421 
Nvme0n1 : 5.00 22382.00 87.43 0.00 0.00 0.00 0.00 0.00 00:08:15.421 =================================================================================================================== 00:08:15.421 Total : 22382.00 87.43 0.00 0.00 0.00 0.00 0.00 00:08:15.421 00:08:16.358 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.358 Nvme0n1 : 6.00 22450.33 87.70 0.00 0.00 0.00 0.00 0.00 00:08:16.358 =================================================================================================================== 00:08:16.358 Total : 22450.33 87.70 0.00 0.00 0.00 0.00 0.00 00:08:16.358 00:08:17.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.769 Nvme0n1 : 7.00 22506.00 87.91 0.00 0.00 0.00 0.00 0.00 00:08:17.769 =================================================================================================================== 00:08:17.769 Total : 22506.00 87.91 0.00 0.00 0.00 0.00 0.00 00:08:17.769 00:08:18.362 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.362 Nvme0n1 : 8.00 22542.75 88.06 0.00 0.00 0.00 0.00 0.00 00:08:18.362 =================================================================================================================== 00:08:18.362 Total : 22542.75 88.06 0.00 0.00 0.00 0.00 0.00 00:08:18.362 00:08:19.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.741 Nvme0n1 : 9.00 22578.44 88.20 0.00 0.00 0.00 0.00 0.00 00:08:19.741 =================================================================================================================== 00:08:19.741 Total : 22578.44 88.20 0.00 0.00 0.00 0.00 0.00 00:08:19.741 00:08:20.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.679 Nvme0n1 : 10.00 22604.60 88.30 0.00 0.00 0.00 0.00 0.00 00:08:20.679 =================================================================================================================== 00:08:20.679 Total : 22604.60 88.30 0.00 0.00 0.00 0.00 0.00 00:08:20.679 00:08:20.679 00:08:20.679 Latency(us) 00:08:20.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.679 Nvme0n1 : 10.01 22604.65 88.30 0.00 0.00 5658.63 2543.42 8176.40 00:08:20.679 =================================================================================================================== 00:08:20.679 Total : 22604.65 88.30 0.00 0.00 5658.63 2543.42 8176.40 00:08:20.679 { 00:08:20.679 "results": [ 00:08:20.679 { 00:08:20.679 "job": "Nvme0n1", 00:08:20.679 "core_mask": "0x2", 00:08:20.679 "workload": "randwrite", 00:08:20.679 "status": "finished", 00:08:20.679 "queue_depth": 128, 00:08:20.679 "io_size": 4096, 00:08:20.679 "runtime": 10.005285, 00:08:20.679 "iops": 22604.653440656613, 00:08:20.679 "mibps": 88.2994275025649, 00:08:20.679 "io_failed": 0, 00:08:20.679 "io_timeout": 0, 00:08:20.679 "avg_latency_us": 5658.628528510244, 00:08:20.679 "min_latency_us": 2543.4209523809523, 00:08:20.679 "max_latency_us": 8176.396190476191 00:08:20.679 } 00:08:20.679 ], 00:08:20.679 "core_count": 1 00:08:20.679 } 00:08:20.680 11:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1894398 00:08:20.680 11:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1894398 ']' 00:08:20.680 11:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@954 -- # kill -0 1894398 00:08:20.680 11:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:20.680 11:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.680 11:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1894398 00:08:20.680 11:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:20.680 11:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:20.680 11:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1894398' 00:08:20.680 killing process with pid 1894398 00:08:20.680 11:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1894398 00:08:20.680 Received shutdown signal, test time was about 10.000000 seconds 00:08:20.680 00:08:20.680 Latency(us) 00:08:20.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.680 =================================================================================================================== 00:08:20.680 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:20.680 11:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1894398 00:08:20.680 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:20.939 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:21.198 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda18c91-43cb-4e42-9099-ee370cbc2e97 00:08:21.198 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:21.198 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:21.198 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:21.198 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1891137 00:08:21.198 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1891137 00:08:21.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1891137 Killed "${NVMF_APP[@]}" "$@" 00:08:21.457 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:21.457 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:21.457 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:21.457 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:08:21.457 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:21.457 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1896702 00:08:21.457 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1896702 00:08:21.457 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:21.457 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1896702 ']' 00:08:21.457 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.457 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.457 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.457 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.457 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:21.457 [2024-10-06 11:03:18.822638] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:08:21.457 [2024-10-06 11:03:18.822681] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.457 [2024-10-06 11:03:18.880094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.457 [2024-10-06 11:03:18.919668] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.457 [2024-10-06 11:03:18.919707] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.457 [2024-10-06 11:03:18.919714] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.457 [2024-10-06 11:03:18.919721] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.457 [2024-10-06 11:03:18.919725] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
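For reference, the dirty-recovery scenario being traced here reduces to a short command sequence; the lines below are a minimal sketch with abbreviated paths and a placeholder lvstore UUID, not the literal invocations from this run:

    # Restart the target after the hard kill, then re-create the AIO bdev so the
    # blobstore on it is replayed and the dirty lvstore is recovered.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'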
00:08:21.457 [2024-10-06 11:03:18.920263] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.457 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.457 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:21.457 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:21.457 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:21.457 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:21.717 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.717 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:21.717 [2024-10-06 11:03:19.211793] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:21.717 [2024-10-06 11:03:19.211916] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:21.717 [2024-10-06 11:03:19.211944] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:21.717 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:21.717 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e9423fd2-53ca-4d12-9cff-b9b8b3280d96 00:08:21.717 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e9423fd2-53ca-4d12-9cff-b9b8b3280d96 00:08:21.717 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:21.717 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:21.717 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:21.717 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:21.717 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:21.977 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e9423fd2-53ca-4d12-9cff-b9b8b3280d96 -t 2000 00:08:22.237 [ 00:08:22.237 { 00:08:22.237 "name": "e9423fd2-53ca-4d12-9cff-b9b8b3280d96", 00:08:22.237 "aliases": [ 00:08:22.237 "lvs/lvol" 00:08:22.237 ], 00:08:22.237 "product_name": "Logical Volume", 00:08:22.237 "block_size": 4096, 00:08:22.237 "num_blocks": 38912, 00:08:22.237 "uuid": "e9423fd2-53ca-4d12-9cff-b9b8b3280d96", 00:08:22.237 "assigned_rate_limits": { 00:08:22.237 "rw_ios_per_sec": 0, 00:08:22.237 "rw_mbytes_per_sec": 0, 00:08:22.237 "r_mbytes_per_sec": 0, 00:08:22.237 "w_mbytes_per_sec": 0 00:08:22.237 }, 00:08:22.237 "claimed": false, 00:08:22.237 "zoned": false, 
00:08:22.237 "supported_io_types": { 00:08:22.237 "read": true, 00:08:22.237 "write": true, 00:08:22.237 "unmap": true, 00:08:22.237 "flush": false, 00:08:22.237 "reset": true, 00:08:22.237 "nvme_admin": false, 00:08:22.237 "nvme_io": false, 00:08:22.237 "nvme_io_md": false, 00:08:22.237 "write_zeroes": true, 00:08:22.237 "zcopy": false, 00:08:22.237 "get_zone_info": false, 00:08:22.237 "zone_management": false, 00:08:22.237 "zone_append": false, 00:08:22.237 "compare": false, 00:08:22.237 "compare_and_write": false, 00:08:22.237 "abort": false, 00:08:22.237 "seek_hole": true, 00:08:22.237 "seek_data": true, 00:08:22.237 "copy": false, 00:08:22.237 "nvme_iov_md": false 00:08:22.237 }, 00:08:22.237 "driver_specific": { 00:08:22.237 "lvol": { 00:08:22.237 "lvol_store_uuid": "eda18c91-43cb-4e42-9099-ee370cbc2e97", 00:08:22.237 "base_bdev": "aio_bdev", 00:08:22.237 "thin_provision": false, 00:08:22.237 "num_allocated_clusters": 38, 00:08:22.237 "snapshot": false, 00:08:22.237 "clone": false, 00:08:22.237 "esnap_clone": false 00:08:22.237 } 00:08:22.237 } 00:08:22.237 } 00:08:22.237 ] 00:08:22.237 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:22.237 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda18c91-43cb-4e42-9099-ee370cbc2e97 00:08:22.237 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:22.237 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:22.237 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda18c91-43cb-4e42-9099-ee370cbc2e97 00:08:22.237 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:22.497 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:22.497 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:22.756 [2024-10-06 11:03:20.164660] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:22.756 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda18c91-43cb-4e42-9099-ee370cbc2e97 00:08:22.756 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:22.756 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda18c91-43cb-4e42-9099-ee370cbc2e97 00:08:22.756 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.756 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:08:22.756 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.756 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.756 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.756 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.756 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.756 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:22.756 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda18c91-43cb-4e42-9099-ee370cbc2e97 00:08:23.015 request: 00:08:23.015 { 00:08:23.015 "uuid": "eda18c91-43cb-4e42-9099-ee370cbc2e97", 00:08:23.015 "method": "bdev_lvol_get_lvstores", 00:08:23.015 "req_id": 1 00:08:23.015 } 00:08:23.015 Got JSON-RPC error response 00:08:23.015 response: 00:08:23.015 { 00:08:23.015 "code": -19, 00:08:23.015 "message": "No such device" 00:08:23.015 } 00:08:23.015 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:23.015 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.015 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:23.015 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.015 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:23.015 aio_bdev 00:08:23.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e9423fd2-53ca-4d12-9cff-b9b8b3280d96 00:08:23.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e9423fd2-53ca-4d12-9cff-b9b8b3280d96 00:08:23.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:23.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:23.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:23.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:23.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:23.275 11:03:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e9423fd2-53ca-4d12-9cff-b9b8b3280d96 -t 2000 00:08:23.534 [ 00:08:23.534 { 00:08:23.534 "name": "e9423fd2-53ca-4d12-9cff-b9b8b3280d96", 00:08:23.534 "aliases": [ 00:08:23.534 "lvs/lvol" 00:08:23.534 ], 00:08:23.534 "product_name": "Logical Volume", 00:08:23.534 "block_size": 4096, 00:08:23.534 "num_blocks": 38912, 00:08:23.534 "uuid": "e9423fd2-53ca-4d12-9cff-b9b8b3280d96", 00:08:23.534 "assigned_rate_limits": { 00:08:23.534 "rw_ios_per_sec": 0, 00:08:23.534 "rw_mbytes_per_sec": 0, 00:08:23.534 "r_mbytes_per_sec": 0, 00:08:23.534 "w_mbytes_per_sec": 0 00:08:23.534 }, 00:08:23.534 "claimed": false, 00:08:23.534 "zoned": false, 00:08:23.534 "supported_io_types": { 00:08:23.534 "read": true, 00:08:23.534 "write": true, 00:08:23.534 "unmap": true, 00:08:23.534 "flush": false, 00:08:23.534 "reset": true, 00:08:23.534 "nvme_admin": false, 00:08:23.534 "nvme_io": false, 00:08:23.534 "nvme_io_md": false, 00:08:23.534 "write_zeroes": true, 00:08:23.534 "zcopy": false, 00:08:23.534 "get_zone_info": false, 00:08:23.534 "zone_management": false, 00:08:23.534 "zone_append": false, 00:08:23.534 "compare": false, 00:08:23.534 "compare_and_write": false, 00:08:23.534 "abort": false, 00:08:23.534 "seek_hole": true, 00:08:23.534 "seek_data": true, 00:08:23.534 "copy": false, 00:08:23.534 "nvme_iov_md": false 00:08:23.534 }, 00:08:23.534 "driver_specific": { 00:08:23.534 "lvol": { 00:08:23.534 "lvol_store_uuid": "eda18c91-43cb-4e42-9099-ee370cbc2e97", 00:08:23.534 "base_bdev": "aio_bdev", 00:08:23.534 "thin_provision": false, 00:08:23.534 "num_allocated_clusters": 38, 00:08:23.534 "snapshot": false, 00:08:23.534 "clone": false, 00:08:23.534 "esnap_clone": false 00:08:23.534 } 00:08:23.534 } 00:08:23.534 } 00:08:23.534 ] 00:08:23.534 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:23.534 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda18c91-43cb-4e42-9099-ee370cbc2e97 00:08:23.534 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:23.793 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:23.794 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda18c91-43cb-4e42-9099-ee370cbc2e97 00:08:23.794 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:23.794 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:23.794 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e9423fd2-53ca-4d12-9cff-b9b8b3280d96 00:08:24.052 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eda18c91-43cb-4e42-9099-ee370cbc2e97 
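The free/total cluster checks traced above are plain arithmetic comparisons on the RPC output; a minimal sketch of the pattern, with the UUID abbreviated:

    free_clusters=$(scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters')
    data_clusters=$(scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters')
    (( free_clusters == 61 ))   # the lvol has 38 clusters allocated out of 99 total, so 61 remain free
    (( data_clusters == 99 ))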
00:08:24.312 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:24.572 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.572 00:08:24.572 real 0m16.855s 00:08:24.572 user 0m43.424s 00:08:24.572 sys 0m4.063s 00:08:24.572 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.572 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:24.572 ************************************ 00:08:24.572 END TEST lvs_grow_dirty 00:08:24.572 ************************************ 00:08:24.572 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:24.572 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:24.572 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:24.572 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:24.572 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:24.572 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:24.572 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:24.572 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:24.572 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:24.572 nvmf_trace.0 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:24.572 rmmod nvme_tcp 00:08:24.572 rmmod nvme_fabrics 00:08:24.572 rmmod nvme_keyring 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1896702 ']' 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1896702 00:08:24.572 
11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1896702 ']' 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1896702 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1896702 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1896702' 00:08:24.572 killing process with pid 1896702 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1896702 00:08:24.572 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1896702 00:08:24.832 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:24.832 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:24.832 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:24.832 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:24.832 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:08:24.832 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:24.832 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:08:24.832 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:24.832 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:24.832 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.832 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.832 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:27.370 00:08:27.370 real 0m41.102s 00:08:27.370 user 1m4.091s 00:08:27.370 sys 0m9.943s 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:27.370 ************************************ 00:08:27.370 END TEST nvmf_lvs_grow 00:08:27.370 ************************************ 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:27.370 ************************************ 00:08:27.370 START TEST nvmf_bdev_io_wait 00:08:27.370 ************************************ 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:27.370 * Looking for test storage... 00:08:27.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.370 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:27.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.371 --rc genhtml_branch_coverage=1 00:08:27.371 --rc genhtml_function_coverage=1 00:08:27.371 --rc genhtml_legend=1 00:08:27.371 --rc geninfo_all_blocks=1 00:08:27.371 --rc geninfo_unexecuted_blocks=1 00:08:27.371 00:08:27.371 ' 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:27.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.371 --rc genhtml_branch_coverage=1 00:08:27.371 --rc genhtml_function_coverage=1 00:08:27.371 --rc genhtml_legend=1 00:08:27.371 --rc geninfo_all_blocks=1 00:08:27.371 --rc geninfo_unexecuted_blocks=1 00:08:27.371 00:08:27.371 ' 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:27.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.371 --rc genhtml_branch_coverage=1 00:08:27.371 --rc genhtml_function_coverage=1 00:08:27.371 --rc genhtml_legend=1 00:08:27.371 --rc geninfo_all_blocks=1 00:08:27.371 --rc geninfo_unexecuted_blocks=1 00:08:27.371 00:08:27.371 ' 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:27.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.371 --rc genhtml_branch_coverage=1 00:08:27.371 --rc genhtml_function_coverage=1 00:08:27.371 --rc genhtml_legend=1 00:08:27.371 --rc geninfo_all_blocks=1 00:08:27.371 --rc geninfo_unexecuted_blocks=1 00:08:27.371 00:08:27.371 ' 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.371 11:03:24 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:27.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:27.371 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:32.651 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:32.651 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.651 11:03:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:32.651 Found net devices under 0000:af:00.0: cvl_0_0 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.651 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:32.652 Found net devices under 0000:af:00.1: cvl_0_1 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:32.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:08:32.652 00:08:32.652 --- 10.0.0.2 ping statistics --- 00:08:32.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.652 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:32.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:08:32.652 00:08:32.652 --- 10.0.0.1 ping statistics --- 00:08:32.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.652 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1900799 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1900799 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1900799 ']' 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.652 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:32.652 [2024-10-06 11:03:29.936505] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:08:32.652 [2024-10-06 11:03:29.936549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.652 [2024-10-06 11:03:29.994172] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.652 [2024-10-06 11:03:30.037542] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.652 [2024-10-06 11:03:30.037587] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.652 [2024-10-06 11:03:30.037594] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.652 [2024-10-06 11:03:30.037600] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.652 [2024-10-06 11:03:30.037606] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.652 [2024-10-06 11:03:30.039110] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.652 [2024-10-06 11:03:30.039206] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.652 [2024-10-06 11:03:30.039304] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.652 [2024-10-06 11:03:30.039306] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.652 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.652 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:32.652 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:32.652 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:32.652 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:32.652 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.652 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:32.652 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.652 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:32.652 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.652 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:32.652 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.652 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:32.652 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.652 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:32.652 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.652 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:32.652 [2024-10-06 11:03:30.223626] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:32.913 Malloc0 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:32.913 [2024-10-06 11:03:30.288439] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1900920 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1900922 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:32.913 { 00:08:32.913 "params": { 
00:08:32.913 "name": "Nvme$subsystem", 00:08:32.913 "trtype": "$TEST_TRANSPORT", 00:08:32.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:32.913 "adrfam": "ipv4", 00:08:32.913 "trsvcid": "$NVMF_PORT", 00:08:32.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:32.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:32.913 "hdgst": ${hdgst:-false}, 00:08:32.913 "ddgst": ${ddgst:-false} 00:08:32.913 }, 00:08:32.913 "method": "bdev_nvme_attach_controller" 00:08:32.913 } 00:08:32.913 EOF 00:08:32.913 )") 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1900924 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:32.913 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:32.913 { 00:08:32.913 "params": { 00:08:32.913 "name": "Nvme$subsystem", 00:08:32.914 "trtype": "$TEST_TRANSPORT", 00:08:32.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:32.914 "adrfam": "ipv4", 00:08:32.914 "trsvcid": "$NVMF_PORT", 00:08:32.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:32.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:32.914 "hdgst": ${hdgst:-false}, 00:08:32.914 "ddgst": ${ddgst:-false} 00:08:32.914 }, 00:08:32.914 "method": "bdev_nvme_attach_controller" 00:08:32.914 } 00:08:32.914 EOF 00:08:32.914 )") 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1900927 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:32.914 { 00:08:32.914 "params": { 
00:08:32.914 "name": "Nvme$subsystem", 00:08:32.914 "trtype": "$TEST_TRANSPORT", 00:08:32.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:32.914 "adrfam": "ipv4", 00:08:32.914 "trsvcid": "$NVMF_PORT", 00:08:32.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:32.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:32.914 "hdgst": ${hdgst:-false}, 00:08:32.914 "ddgst": ${ddgst:-false} 00:08:32.914 }, 00:08:32.914 "method": "bdev_nvme_attach_controller" 00:08:32.914 } 00:08:32.914 EOF 00:08:32.914 )") 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:32.914 { 00:08:32.914 "params": { 00:08:32.914 "name": "Nvme$subsystem", 00:08:32.914 "trtype": "$TEST_TRANSPORT", 00:08:32.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:32.914 "adrfam": "ipv4", 00:08:32.914 "trsvcid": "$NVMF_PORT", 00:08:32.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:32.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:32.914 "hdgst": ${hdgst:-false}, 00:08:32.914 "ddgst": ${ddgst:-false} 00:08:32.914 }, 00:08:32.914 "method": "bdev_nvme_attach_controller" 00:08:32.914 } 00:08:32.914 EOF 00:08:32.914 )") 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1900920 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:32.914 "params": { 00:08:32.914 "name": "Nvme1", 00:08:32.914 "trtype": "tcp", 00:08:32.914 "traddr": "10.0.0.2", 00:08:32.914 "adrfam": "ipv4", 00:08:32.914 "trsvcid": "4420", 00:08:32.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:32.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:32.914 "hdgst": false, 00:08:32.914 "ddgst": false 00:08:32.914 }, 00:08:32.914 "method": "bdev_nvme_attach_controller" 00:08:32.914 }' 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:32.914 "params": { 00:08:32.914 "name": "Nvme1", 00:08:32.914 "trtype": "tcp", 00:08:32.914 "traddr": "10.0.0.2", 00:08:32.914 "adrfam": "ipv4", 00:08:32.914 "trsvcid": "4420", 00:08:32.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:32.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:32.914 "hdgst": false, 00:08:32.914 "ddgst": false 00:08:32.914 }, 00:08:32.914 "method": "bdev_nvme_attach_controller" 00:08:32.914 }' 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:32.914 "params": { 00:08:32.914 "name": "Nvme1", 00:08:32.914 "trtype": "tcp", 00:08:32.914 "traddr": "10.0.0.2", 00:08:32.914 "adrfam": "ipv4", 00:08:32.914 "trsvcid": "4420", 00:08:32.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:32.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:32.914 "hdgst": false, 00:08:32.914 "ddgst": false 00:08:32.914 }, 00:08:32.914 "method": "bdev_nvme_attach_controller" 00:08:32.914 }' 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:32.914 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:32.914 "params": { 00:08:32.914 "name": "Nvme1", 00:08:32.914 "trtype": "tcp", 00:08:32.914 "traddr": "10.0.0.2", 00:08:32.914 "adrfam": "ipv4", 00:08:32.914 "trsvcid": "4420", 00:08:32.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:32.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:32.914 "hdgst": false, 00:08:32.914 "ddgst": false 00:08:32.914 }, 00:08:32.914 "method": "bdev_nvme_attach_controller" 00:08:32.914 }' 00:08:32.914 [2024-10-06 11:03:30.337159] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:08:32.914 [2024-10-06 11:03:30.337208] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:32.914 [2024-10-06 11:03:30.338269] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:08:32.914 [2024-10-06 11:03:30.338308] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:32.914 [2024-10-06 11:03:30.338505] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:08:32.914 [2024-10-06 11:03:30.338541] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:32.914 [2024-10-06 11:03:30.344026] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:08:32.914 [2024-10-06 11:03:30.344087] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:33.174 [2024-10-06 11:03:30.514743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.174 [2024-10-06 11:03:30.545393] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:08:33.174 [2024-10-06 11:03:30.614406] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.174 [2024-10-06 11:03:30.644138] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:08:33.174 [2024-10-06 11:03:30.716026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.433 [2024-10-06 11:03:30.750205] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:08:33.433 [2024-10-06 11:03:30.757705] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.433 [2024-10-06 11:03:30.784980] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:08:33.692 Running I/O for 1 seconds... 00:08:33.693 Running I/O for 1 seconds... 00:08:33.693 Running I/O for 1 seconds... 00:08:33.693 Running I/O for 1 seconds... 00:08:34.631 12454.00 IOPS, 48.65 MiB/s 00:08:34.631 Latency(us) 00:08:34.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.631 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:34.631 Nvme1n1 : 1.01 12508.06 48.86 0.00 0.00 10199.90 5648.58 16477.62 00:08:34.631 =================================================================================================================== 00:08:34.631 Total : 12508.06 48.86 0.00 0.00 10199.90 5648.58 16477.62 00:08:34.631 254360.00 IOPS, 993.59 MiB/s 00:08:34.631 Latency(us) 00:08:34.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.631 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:34.631 Nvme1n1 : 1.00 253968.76 992.07 0.00 0.00 501.38 235.03 1513.57 00:08:34.631 =================================================================================================================== 00:08:34.631 Total : 253968.76 992.07 0.00 0.00 501.38 235.03 1513.57 00:08:34.890 10058.00 IOPS, 39.29 MiB/s 00:08:34.890 Latency(us) 00:08:34.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.890 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:34.890 Nvme1n1 : 1.01 10112.16 39.50 0.00 0.00 12608.31 6241.52 19848.05 00:08:34.890 =================================================================================================================== 00:08:34.890 Total : 10112.16 39.50 0.00 0.00 12608.31 6241.52 19848.05 00:08:34.890 10926.00 IOPS, 42.68 MiB/s 00:08:34.890 Latency(us) 00:08:34.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.890 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:34.890 Nvme1n1 : 1.01 11011.19 43.01 0.00 0.00 11595.20 3386.03 22594.32 00:08:34.890 =================================================================================================================== 00:08:34.890 Total : 11011.19 43.01 0.00 0.00 11595.20 3386.03 22594.32 00:08:34.890 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1900922 00:08:34.890 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@39 -- # wait 1900924 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1900927 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.150 rmmod nvme_tcp 00:08:35.150 rmmod nvme_fabrics 00:08:35.150 rmmod nvme_keyring 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1900799 ']' 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1900799 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1900799 ']' 00:08:35.150 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1900799 00:08:35.151 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:35.151 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:35.151 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1900799 00:08:35.151 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:35.151 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:35.151 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1900799' 00:08:35.151 killing process with pid 1900799 00:08:35.151 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1900799 00:08:35.151 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1900799 00:08:35.410 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:35.410 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:35.410 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:35.410 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:35.410 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:08:35.410 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:08:35.410 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:35.411 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.411 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:35.411 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.411 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.411 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.316 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:37.317 00:08:37.317 real 0m10.403s 00:08:37.317 user 0m17.712s 00:08:37.317 sys 0m5.845s 00:08:37.317 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.317 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.317 ************************************ 00:08:37.317 END TEST nvmf_bdev_io_wait 00:08:37.317 ************************************ 00:08:37.577 11:03:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:37.577 11:03:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:37.577 11:03:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.577 11:03:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:37.577 ************************************ 00:08:37.577 START TEST nvmf_queue_depth 00:08:37.577 ************************************ 00:08:37.577 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:37.577 * Looking for test storage... 
00:08:37.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.577 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:37.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.578 --rc genhtml_branch_coverage=1 00:08:37.578 --rc genhtml_function_coverage=1 00:08:37.578 --rc genhtml_legend=1 00:08:37.578 --rc geninfo_all_blocks=1 00:08:37.578 --rc geninfo_unexecuted_blocks=1 00:08:37.578 00:08:37.578 ' 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:37.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.578 --rc genhtml_branch_coverage=1 00:08:37.578 --rc genhtml_function_coverage=1 00:08:37.578 --rc genhtml_legend=1 00:08:37.578 --rc geninfo_all_blocks=1 00:08:37.578 --rc geninfo_unexecuted_blocks=1 00:08:37.578 00:08:37.578 ' 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:37.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.578 --rc genhtml_branch_coverage=1 00:08:37.578 --rc genhtml_function_coverage=1 00:08:37.578 --rc genhtml_legend=1 00:08:37.578 --rc geninfo_all_blocks=1 00:08:37.578 --rc geninfo_unexecuted_blocks=1 00:08:37.578 00:08:37.578 ' 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:37.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.578 --rc genhtml_branch_coverage=1 00:08:37.578 --rc genhtml_function_coverage=1 00:08:37.578 --rc genhtml_legend=1 00:08:37.578 --rc geninfo_all_blocks=1 00:08:37.578 --rc geninfo_unexecuted_blocks=1 00:08:37.578 00:08:37.578 ' 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.578 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.838 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:37.838 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:37.838 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:37.838 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:37.838 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:37.838 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.838 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:37.838 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:37.838 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:37.838 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.838 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.838 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.838 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:37.838 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:37.838 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:37.838 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:44.412 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:44.412 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.412 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:44.413 Found net devices under 0000:af:00.0: cvl_0_0 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:44.413 Found net devices under 0000:af:00.1: cvl_0_1 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:44.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:08:44.413 00:08:44.413 --- 10.0.0.2 ping statistics --- 00:08:44.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.413 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:44.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:08:44.413 00:08:44.413 --- 10.0.0.1 ping statistics --- 00:08:44.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.413 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:44.413 11:03:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1904872 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1904872 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1904872 ']' 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.413 [2024-10-06 11:03:41.068633] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:08:44.413 [2024-10-06 11:03:41.068680] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.413 [2024-10-06 11:03:41.129760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.413 [2024-10-06 11:03:41.168862] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.413 [2024-10-06 11:03:41.168903] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.413 [2024-10-06 11:03:41.168910] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.413 [2024-10-06 11:03:41.168916] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.413 [2024-10-06 11:03:41.168921] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.413 [2024-10-06 11:03:41.169433] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.413 [2024-10-06 11:03:41.298583] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:44.413 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.414 Malloc0 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.414 11:03:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.414 [2024-10-06 11:03:41.352033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1904897 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1904897 /var/tmp/bdevperf.sock 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1904897 ']' 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:44.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.414 [2024-10-06 11:03:41.399946] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:08:44.414 [2024-10-06 11:03:41.399988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1904897 ] 00:08:44.414 [2024-10-06 11:03:41.454281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.414 [2024-10-06 11:03:41.493602] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.414 NVMe0n1 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.414 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:44.414 Running I/O for 10 seconds... 00:08:54.677 11688.00 IOPS, 45.66 MiB/s 12078.00 IOPS, 47.18 MiB/s 12179.67 IOPS, 47.58 MiB/s 12127.00 IOPS, 47.37 MiB/s 12192.40 IOPS, 47.63 MiB/s 12276.00 IOPS, 47.95 MiB/s 12286.86 IOPS, 48.00 MiB/s 12330.38 IOPS, 48.17 MiB/s 12330.00 IOPS, 48.16 MiB/s 12373.20 IOPS, 48.33 MiB/s 00:08:54.677 Latency(us) 00:08:54.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.677 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:54.677 Verification LBA range: start 0x0 length 0x4000 00:08:54.677 NVMe0n1 : 10.07 12382.23 48.37 0.00 0.00 82417.80 18849.40 55175.07 00:08:54.677 =================================================================================================================== 00:08:54.677 Total : 12382.23 48.37 0.00 0.00 82417.80 18849.40 55175.07 00:08:54.677 { 00:08:54.677 "results": [ 00:08:54.677 { 00:08:54.677 "job": "NVMe0n1", 00:08:54.677 "core_mask": "0x1", 00:08:54.677 "workload": "verify", 00:08:54.677 "status": "finished", 00:08:54.677 "verify_range": { 00:08:54.677 "start": 0, 00:08:54.677 "length": 16384 00:08:54.677 }, 00:08:54.677 "queue_depth": 1024, 00:08:54.677 "io_size": 4096, 00:08:54.677 "runtime": 10.065876, 00:08:54.677 "iops": 12382.230816274709, 00:08:54.677 "mibps": 48.36808912607308, 00:08:54.677 "io_failed": 0, 00:08:54.677 "io_timeout": 0, 00:08:54.677 "avg_latency_us": 82417.80156054218, 00:08:54.677 "min_latency_us": 18849.401904761904, 00:08:54.677 "max_latency_us": 55175.07047619048 00:08:54.677 } 00:08:54.677 ], 00:08:54.677 "core_count": 1 00:08:54.677 } 00:08:54.677 11:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1904897 00:08:54.677 11:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1904897 ']' 00:08:54.677 11:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1904897 00:08:54.677 
11:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:54.677 11:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:54.677 11:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1904897 00:08:54.677 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:54.677 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:54.677 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1904897' 00:08:54.677 killing process with pid 1904897 00:08:54.677 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1904897 00:08:54.677 Received shutdown signal, test time was about 10.000000 seconds 00:08:54.677 00:08:54.677 Latency(us) 00:08:54.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.677 =================================================================================================================== 00:08:54.677 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:54.677 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1904897 00:08:54.677 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:54.677 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:54.677 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:54.677 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:54.677 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:54.677 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:54.677 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:54.677 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:54.677 rmmod nvme_tcp 00:08:54.677 rmmod nvme_fabrics 00:08:54.677 rmmod nvme_keyring 00:08:54.937 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.937 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:54.937 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:54.937 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1904872 ']' 00:08:54.937 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1904872 00:08:54.937 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1904872 ']' 00:08:54.937 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1904872 00:08:54.937 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:54.937 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:54.937 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1904872 00:08:54.937 
11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:54.937 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:54.937 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1904872' 00:08:54.937 killing process with pid 1904872 00:08:54.937 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1904872 00:08:54.937 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1904872 00:08:55.196 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:55.197 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:55.197 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:55.197 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:55.197 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:08:55.197 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:55.197 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:08:55.197 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:55.197 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:55.197 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.197 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.197 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.105 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:57.105 00:08:57.105 real 0m19.654s 00:08:57.105 user 0m23.089s 00:08:57.105 sys 0m6.011s 00:08:57.105 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.105 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.105 ************************************ 00:08:57.105 END TEST nvmf_queue_depth 00:08:57.105 ************************************ 00:08:57.105 11:03:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:57.105 11:03:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:57.105 11:03:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.105 11:03:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:57.105 ************************************ 00:08:57.105 START TEST nvmf_target_multipath 00:08:57.105 ************************************ 00:08:57.105 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:57.365 * Looking for test storage... 
00:08:57.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.365 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:57.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.366 --rc genhtml_branch_coverage=1 00:08:57.366 --rc genhtml_function_coverage=1 00:08:57.366 --rc genhtml_legend=1 00:08:57.366 --rc geninfo_all_blocks=1 00:08:57.366 --rc geninfo_unexecuted_blocks=1 00:08:57.366 00:08:57.366 ' 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:57.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.366 --rc genhtml_branch_coverage=1 00:08:57.366 --rc genhtml_function_coverage=1 00:08:57.366 --rc genhtml_legend=1 00:08:57.366 --rc geninfo_all_blocks=1 00:08:57.366 --rc geninfo_unexecuted_blocks=1 00:08:57.366 00:08:57.366 ' 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:57.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.366 --rc genhtml_branch_coverage=1 00:08:57.366 --rc genhtml_function_coverage=1 00:08:57.366 --rc genhtml_legend=1 00:08:57.366 --rc geninfo_all_blocks=1 00:08:57.366 --rc geninfo_unexecuted_blocks=1 00:08:57.366 00:08:57.366 ' 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:57.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.366 --rc genhtml_branch_coverage=1 00:08:57.366 --rc genhtml_function_coverage=1 00:08:57.366 --rc genhtml_legend=1 00:08:57.366 --rc geninfo_all_blocks=1 00:08:57.366 --rc geninfo_unexecuted_blocks=1 00:08:57.366 00:08:57.366 ' 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:57.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:57.366 11:03:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:02.736 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:02.736 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:02.736 Found net devices under 0000:af:00.0: cvl_0_0 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.736 11:04:00 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:02.736 Found net devices under 0000:af:00.1: cvl_0_1 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.736 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.737 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:02.737 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.737 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.737 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.737 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:02.737 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:02.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:09:02.737 00:09:02.737 --- 10.0.0.2 ping statistics --- 00:09:02.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.737 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:09:02.737 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:02.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:09:02.737 00:09:02.737 --- 10.0.0.1 ping statistics --- 00:09:02.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.737 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:09:02.737 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.737 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:09:02.737 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:02.737 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.737 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:02.737 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:02.737 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.737 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:02.737 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:02.996 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:02.996 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:02.996 only one NIC for nvmf test 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
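The nvmf_tcp_init trace for this test repeats the single-node topology used for the queue-depth run: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, and an iptables rule tagged SPDK_NVMF admits TCP port 4420 before both directions are ping-checked. A rough shell equivalent, using the interface names as they appear in this log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # root namespace -> target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator address

Because NVMF_SECOND_TARGET_IP is empty on this host, multipath.sh reports "only one NIC for nvmf test" and exits 0, so only the nvmftestfini cleanup path is traced from here on.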
00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:02.997 rmmod nvme_tcp 00:09:02.997 rmmod nvme_fabrics 00:09:02.997 rmmod nvme_keyring 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.997 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.903 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:04.903 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:04.903 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:04.903 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:04.903 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:04.903 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:04.903 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:04.903 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:04.903 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:05.162 00:09:05.162 real 0m7.851s 00:09:05.162 user 0m1.695s 00:09:05.162 sys 0m4.139s 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:05.162 ************************************ 00:09:05.162 END TEST nvmf_target_multipath 00:09:05.162 ************************************ 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.162 ************************************ 00:09:05.162 START TEST nvmf_zcopy 00:09:05.162 ************************************ 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:05.162 * Looking for test storage... 
00:09:05.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:09:05.162 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:05.422 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:05.422 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.422 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.422 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.422 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.422 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.422 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.422 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.422 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.422 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.422 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.422 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.422 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:05.422 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:05.422 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.422 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:05.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.423 --rc genhtml_branch_coverage=1 00:09:05.423 --rc genhtml_function_coverage=1 00:09:05.423 --rc genhtml_legend=1 00:09:05.423 --rc geninfo_all_blocks=1 00:09:05.423 --rc geninfo_unexecuted_blocks=1 00:09:05.423 00:09:05.423 ' 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:05.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.423 --rc genhtml_branch_coverage=1 00:09:05.423 --rc genhtml_function_coverage=1 00:09:05.423 --rc genhtml_legend=1 00:09:05.423 --rc geninfo_all_blocks=1 00:09:05.423 --rc geninfo_unexecuted_blocks=1 00:09:05.423 00:09:05.423 ' 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:05.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.423 --rc genhtml_branch_coverage=1 00:09:05.423 --rc genhtml_function_coverage=1 00:09:05.423 --rc genhtml_legend=1 00:09:05.423 --rc geninfo_all_blocks=1 00:09:05.423 --rc geninfo_unexecuted_blocks=1 00:09:05.423 00:09:05.423 ' 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:05.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.423 --rc genhtml_branch_coverage=1 00:09:05.423 --rc genhtml_function_coverage=1 00:09:05.423 --rc genhtml_legend=1 00:09:05.423 --rc geninfo_all_blocks=1 00:09:05.423 --rc geninfo_unexecuted_blocks=1 00:09:05.423 00:09:05.423 ' 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:05.423 11:04:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:10.702 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:10.702 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:10.702 Found net devices under 0000:af:00.0: cvl_0_0 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:10.702 Found net devices under 0000:af:00.1: cvl_0_1 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:10.702 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:10.703 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:10.703 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:10.703 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:10.703 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:10.703 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:10.703 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:10.703 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:10.703 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:10.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:09:10.703 00:09:10.703 --- 10.0.0.2 ping statistics --- 00:09:10.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.703 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:09:10.703 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:10.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:10.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:09:10.703 00:09:10.703 --- 10.0.0.1 ping statistics --- 00:09:10.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.703 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:09:10.703 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.703 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:09:10.703 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:10.703 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.703 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:10.703 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:10.703 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.703 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:10.703 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:10.961 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:10.961 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:10.961 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:10.961 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:10.961 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1913625 00:09:10.961 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1913625 00:09:10.961 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:10.961 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1913625 ']' 00:09:10.961 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.961 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:10.961 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.961 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.961 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:10.961 [2024-10-06 11:04:08.358326] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:09:10.961 [2024-10-06 11:04:08.358367] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.961 [2024-10-06 11:04:08.413728] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.961 [2024-10-06 11:04:08.450247] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.961 [2024-10-06 11:04:08.450290] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.961 [2024-10-06 11:04:08.450298] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.961 [2024-10-06 11:04:08.450303] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.961 [2024-10-06 11:04:08.450308] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.961 [2024-10-06 11:04:08.450859] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.220 [2024-10-06 11:04:08.586882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.220 [2024-10-06 11:04:08.603045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.220 malloc0 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:11.220 { 00:09:11.220 "params": { 00:09:11.220 "name": "Nvme$subsystem", 00:09:11.220 "trtype": "$TEST_TRANSPORT", 00:09:11.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.220 "adrfam": "ipv4", 00:09:11.220 "trsvcid": "$NVMF_PORT", 00:09:11.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.220 "hdgst": ${hdgst:-false}, 00:09:11.220 "ddgst": ${ddgst:-false} 00:09:11.220 }, 00:09:11.220 "method": "bdev_nvme_attach_controller" 00:09:11.220 } 00:09:11.220 EOF 00:09:11.220 )") 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
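At this point the zcopy target is fully configured. Condensed into one place, the namespace plumbing and RPC calls traced above look roughly like the sketch below; the interface names, NQNs and addresses are the ones from this run, and rpc_cmd in the harness is assumed to forward to scripts/rpc.py on the target's default RPC socket (paths shown relative to the SPDK checkout):

    # move the target-side port into its own network namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'                          # tag lets nvmftestfini strip it later

    # start nvmf_tgt inside the namespace, then configure it over RPC
    # (the harness waits for the RPC socket via waitforlisten before issuing RPCs)
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then uses the JSON generated by gen_nvmf_target_json to attach Nvme1 to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 and drives the 8192-byte verify workload for 10 seconds, as shown in the output that follows.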
00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:11.220 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:11.220 "params": { 00:09:11.220 "name": "Nvme1", 00:09:11.220 "trtype": "tcp", 00:09:11.220 "traddr": "10.0.0.2", 00:09:11.220 "adrfam": "ipv4", 00:09:11.220 "trsvcid": "4420", 00:09:11.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.220 "hdgst": false, 00:09:11.220 "ddgst": false 00:09:11.220 }, 00:09:11.220 "method": "bdev_nvme_attach_controller" 00:09:11.220 }' 00:09:11.220 [2024-10-06 11:04:08.691419] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:09:11.220 [2024-10-06 11:04:08.691463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1913649 ] 00:09:11.220 [2024-10-06 11:04:08.746134] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.220 [2024-10-06 11:04:08.785426] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.788 Running I/O for 10 seconds... 00:09:21.606 8549.00 IOPS, 66.79 MiB/s 8616.50 IOPS, 67.32 MiB/s 8665.33 IOPS, 67.70 MiB/s 8699.00 IOPS, 67.96 MiB/s 8694.80 IOPS, 67.93 MiB/s 8701.67 IOPS, 67.98 MiB/s 8712.86 IOPS, 68.07 MiB/s 8695.62 IOPS, 67.93 MiB/s 8706.22 IOPS, 68.02 MiB/s 8711.70 IOPS, 68.06 MiB/s 00:09:21.606 Latency(us) 00:09:21.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.606 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:21.606 Verification LBA range: start 0x0 length 0x1000 00:09:21.606 Nvme1n1 : 10.01 8714.96 68.09 0.00 0.00 14646.53 2059.70 23343.30 00:09:21.606 =================================================================================================================== 00:09:21.606 Total : 8714.96 68.09 0.00 0.00 14646.53 2059.70 23343.30 00:09:21.866 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:21.866 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:21.866 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1915433 00:09:21.866 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:21.866 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:21.866 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:21.866 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:21.866 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:21.866 { 00:09:21.866 "params": { 00:09:21.866 "name": "Nvme$subsystem", 00:09:21.866 "trtype": "$TEST_TRANSPORT", 00:09:21.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:21.866 "adrfam": "ipv4", 00:09:21.866 "trsvcid": "$NVMF_PORT", 00:09:21.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:21.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:21.866 "hdgst": ${hdgst:-false}, 00:09:21.866 "ddgst": ${ddgst:-false} 00:09:21.866 }, 00:09:21.866 "method": 
"bdev_nvme_attach_controller" 00:09:21.866 } 00:09:21.866 EOF 00:09:21.866 )") 00:09:21.866 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:21.866 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:21.866 [2024-10-06 11:04:19.275419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.275451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:09:21.866 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:21.866 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:21.866 "params": { 00:09:21.866 "name": "Nvme1", 00:09:21.866 "trtype": "tcp", 00:09:21.866 "traddr": "10.0.0.2", 00:09:21.866 "adrfam": "ipv4", 00:09:21.866 "trsvcid": "4420", 00:09:21.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:21.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:21.866 "hdgst": false, 00:09:21.866 "ddgst": false 00:09:21.866 }, 00:09:21.866 "method": "bdev_nvme_attach_controller" 00:09:21.866 }' 00:09:21.866 [2024-10-06 11:04:19.283406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.283420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.291423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.291433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.299143] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:09:21.866 [2024-10-06 11:04:19.299186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1915433 ] 00:09:21.866 [2024-10-06 11:04:19.299445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.299455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.307466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.307475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.315489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.315498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.323508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.323518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.331545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.331554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.339551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.339560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.347572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.347581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.348276] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.866 [2024-10-06 11:04:19.355595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.355607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.363616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.363629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.371639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.371659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.379658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.379675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.387682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.387693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.388009] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.866 [2024-10-06 11:04:19.395705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:09:21.866 [2024-10-06 11:04:19.395719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.403731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.403748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.411748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.411760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.419769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.419780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.427787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.427797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.866 [2024-10-06 11:04:19.435808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.866 [2024-10-06 11:04:19.435818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.443832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.443842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.451849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.451860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.459869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.459879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.467891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.467900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.475926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.475946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.483939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.483952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.491960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.491971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.499982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.499995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.508000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.508012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 
11:04:19.516021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.516034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.524040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.524049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.532286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.532303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.540089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.540102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 Running I/O for 5 seconds... 00:09:22.126 [2024-10-06 11:04:19.548107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.548116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.560007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.560026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.567657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.567676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.576777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.576796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.585596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.585614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.594166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.594184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.602738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.602756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.611724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.611741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.620307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.620324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.628894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.628911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.637407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:22.126 [2024-10-06 11:04:19.637424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.645980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.645998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.654748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.654767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.664327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.664345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.673559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.673576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.682838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.682856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.691373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.691395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.126 [2024-10-06 11:04:19.699880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.126 [2024-10-06 11:04:19.699898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.708389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.708407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.717668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.717687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.726557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.726575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.736271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.736290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.743053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.743074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.754172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.754191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.762868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.762885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.771373] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.771391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.779781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.779798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.788346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.788364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.797584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.797602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.806233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.806251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.815299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.815316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.824372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.824390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.832903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.832920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.842111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.842128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.851163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.851180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.860338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.860360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.869499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.869517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.878067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.386 [2024-10-06 11:04:19.878084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.386 [2024-10-06 11:04:19.884915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.387 [2024-10-06 11:04:19.884932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.387 [2024-10-06 11:04:19.895219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.387 [2024-10-06 11:04:19.895236] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.387 [2024-10-06 11:04:19.903840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.387 [2024-10-06 11:04:19.903857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.387 [2024-10-06 11:04:19.912532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.387 [2024-10-06 11:04:19.912549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.387 [2024-10-06 11:04:19.921557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.387 [2024-10-06 11:04:19.921574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.387 [2024-10-06 11:04:19.929965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.387 [2024-10-06 11:04:19.929982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.387 [2024-10-06 11:04:19.939207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.387 [2024-10-06 11:04:19.939225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.387 [2024-10-06 11:04:19.948284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.387 [2024-10-06 11:04:19.948301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.387 [2024-10-06 11:04:19.956852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.387 [2024-10-06 11:04:19.956869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.646 [2024-10-06 11:04:19.965793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.646 [2024-10-06 11:04:19.965812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.646 [2024-10-06 11:04:19.975259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.646 [2024-10-06 11:04:19.975277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.646 [2024-10-06 11:04:19.984437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.646 [2024-10-06 11:04:19.984454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.646 [2024-10-06 11:04:19.993312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.646 [2024-10-06 11:04:19.993330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.646 [2024-10-06 11:04:20.001833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.646 [2024-10-06 11:04:20.001852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.646 [2024-10-06 11:04:20.010454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.646 [2024-10-06 11:04:20.010471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.646 [2024-10-06 11:04:20.019561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.646 [2024-10-06 11:04:20.019579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.646 [2024-10-06 11:04:20.027245] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.646 [2024-10-06 11:04:20.027271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.646 [2024-10-06 11:04:20.038039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.646 [2024-10-06 11:04:20.038057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.046550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.046567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.055706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.055726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.064137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.064155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.072823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.072841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.082026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.082046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.090721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.090740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.099836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.099856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.108937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.108956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.118172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.118191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.127408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.127427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.136449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.136467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.145889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.145906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.154577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.154595] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.163972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.163992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.172730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.172749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.181355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.181372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.190686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.190704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.199747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.199770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.208810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.208828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.647 [2024-10-06 11:04:20.218108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.647 [2024-10-06 11:04:20.218126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.227490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.227508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.236856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.236875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.246178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.246196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.255479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.255496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.264748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.264766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.273453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.273471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.281985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.282008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.291244] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.291262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.300365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.300384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.308818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.308837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.317542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.317560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.326347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.326366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.335627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.335648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.344407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.344426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.353457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.353476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.362621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.362639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.371820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.371838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.380949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.380968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.390154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.390172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.399425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.399443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.408147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.408165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.417312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.417329] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.426669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.426687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.436007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.436025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.444737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.444756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.453931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.453950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.462426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.462444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.471612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.471630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.907 [2024-10-06 11:04:20.480971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.907 [2024-10-06 11:04:20.480990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.490066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.490084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.498564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.498582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.507805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.507823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.517008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.517026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.525548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.525566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.534117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.534135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.542861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.542878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 16557.00 IOPS, 129.35 MiB/s [2024-10-06 11:04:20.551909] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.551926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.561222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.561239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.570218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.570236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.579533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.579551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.588205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.588222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.596660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.596678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.605557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.605575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.614247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.614265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.622782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.622800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.631300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.631317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.640660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.640677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.649134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.649151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.658456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.658474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.666974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.666992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.675843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.168 [2024-10-06 11:04:20.675860] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.168 [2024-10-06 11:04:20.683940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.169 [2024-10-06 11:04:20.683957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.169 [2024-10-06 11:04:20.692458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.169 [2024-10-06 11:04:20.692476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.169 [2024-10-06 11:04:20.699354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.169 [2024-10-06 11:04:20.699375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.169 [2024-10-06 11:04:20.710023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.169 [2024-10-06 11:04:20.710040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.169 [2024-10-06 11:04:20.718780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.169 [2024-10-06 11:04:20.718798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.169 [2024-10-06 11:04:20.727605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.169 [2024-10-06 11:04:20.727623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.169 [2024-10-06 11:04:20.736182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.169 [2024-10-06 11:04:20.736199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.429 [2024-10-06 11:04:20.744896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.429 [2024-10-06 11:04:20.744914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.429 [2024-10-06 11:04:20.751992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.429 [2024-10-06 11:04:20.752009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.429 [2024-10-06 11:04:20.761730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.429 [2024-10-06 11:04:20.761747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.429 [2024-10-06 11:04:20.770719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.429 [2024-10-06 11:04:20.770736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.429 [2024-10-06 11:04:20.779347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.779364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.788001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.788019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.796621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.796638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.805110] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.805127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.814584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.814602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.823718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.823736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.832132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.832149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.840734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.840751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.849392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.849409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.858277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.858295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.867474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.867495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.876324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.876342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.885580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.885597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.892389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.892406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.903443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.903461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.912495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.912512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.921138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.921155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.930217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.930234] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.939205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.939222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.947782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.947800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.957236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.957254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.965728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.965746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.974972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.974989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.983434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.983452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:20.992516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:20.992534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.430 [2024-10-06 11:04:21.001854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.430 [2024-10-06 11:04:21.001872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.010127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.010146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.019410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.019428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.028396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.028414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.037583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.037605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.046607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.046625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.055840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.055857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.065292] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.065310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.074564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.074582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.083862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.083881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.092526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.092544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.101358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.101376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.110612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.110629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.119145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.119162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.128469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.128487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.137081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.137097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.146267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.146284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.154844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.154862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.163201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.163219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.172004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.172022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.181049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.181075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.189668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.189687] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.198797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.198816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.207433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.207454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.216461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.216478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.225608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.225625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.234179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.234196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.242618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.242635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.251143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.251160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.690 [2024-10-06 11:04:21.260447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.690 [2024-10-06 11:04:21.260465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.269696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.269714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.278704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.278722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.287209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.287226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.296077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.296096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.304627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.304644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.313253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.313271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.321939] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.321957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.330904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.330921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.340156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.340173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.349487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.349504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.357926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.357942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.366519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.366536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.374901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.374917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.384131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.384148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.393386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.393404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.400414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.400431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.410785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.410802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.419335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.419352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.428055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.428088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.437268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.437285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.445819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.445836] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.950 [2024-10-06 11:04:21.454990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.950 [2024-10-06 11:04:21.455008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.951 [2024-10-06 11:04:21.464270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.951 [2024-10-06 11:04:21.464288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.951 [2024-10-06 11:04:21.473236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.951 [2024-10-06 11:04:21.473255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.951 [2024-10-06 11:04:21.482412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.951 [2024-10-06 11:04:21.482431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.951 [2024-10-06 11:04:21.490886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.951 [2024-10-06 11:04:21.490903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.951 [2024-10-06 11:04:21.499788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.951 [2024-10-06 11:04:21.499806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.951 [2024-10-06 11:04:21.509464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.951 [2024-10-06 11:04:21.509482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.951 [2024-10-06 11:04:21.518032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.951 [2024-10-06 11:04:21.518050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.526689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.526708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.536047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.536074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.545306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.545324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 16661.50 IOPS, 130.17 MiB/s [2024-10-06 11:04:21.554547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.554565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.563387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.563405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.572573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.572591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.581271] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.581290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.590562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.590581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.598895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.598913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.608132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.608149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.617021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.617040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.626291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.626310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.634725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.634743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.643833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.643850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.653056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.653081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.661089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.661106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.670114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.670132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.679187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.679205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.688358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.688376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.695173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.695191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.706235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.706254] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.715083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.715101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.724249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.724267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.733368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.733386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.741848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.741866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.750406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.750423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.759108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.759126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.768202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.768221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.211 [2024-10-06 11:04:21.777129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.211 [2024-10-06 11:04:21.777146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.471 [2024-10-06 11:04:21.786405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.471 [2024-10-06 11:04:21.786425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.471 [2024-10-06 11:04:21.794972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.471 [2024-10-06 11:04:21.794989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.471 [2024-10-06 11:04:21.801802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.471 [2024-10-06 11:04:21.801819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.471 [2024-10-06 11:04:21.812146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.471 [2024-10-06 11:04:21.812164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.471 [2024-10-06 11:04:21.820909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.471 [2024-10-06 11:04:21.820926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.471 [2024-10-06 11:04:21.830235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.471 [2024-10-06 11:04:21.830253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.471 [2024-10-06 11:04:21.839371] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:24.471 [2024-10-06 11:04:21.839389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:24.471 [... the subsystem.c:2128 "Requested NSID 1 already in use" / nvmf_rpc.c:1517 "Unable to add namespace" pair above repeats for every add-namespace attempt from 11:04:21.848 to 11:04:22.550 ...]
00:09:24.994 16684.00 IOPS, 130.34 MiB/s
00:09:25.253 [... the same error pair continues to repeat from 11:04:22.559 to 11:04:23.516 ...]
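The error flood above is the negative path of what appears to be repeated nvmf_subsystem_add_ns RPC calls that all request NSID 1 while that NSID is still attached to the subsystem. As a rough illustration only, the sketch below drives the same JSON-RPC method against a running SPDK target; the socket path /var/tmp/spdk.sock, the subsystem NQN nqn.2016-06.io.spdk:cnode1, and the bdev name Malloc0 are assumptions for the example and are not taken from this log.

    #!/usr/bin/env python3
    # Sketch: call nvmf_subsystem_add_ns twice with the same NSID so the second
    # call is rejected, which is the condition the log entries above report.
    import json
    import socket

    SOCK_PATH = "/var/tmp/spdk.sock"  # default SPDK RPC socket (assumption for this sketch)

    def rpc_call(sock, method, params, req_id):
        """Send one JSON-RPC 2.0 request over the Unix socket and read one response object."""
        request = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
        sock.sendall(json.dumps(request).encode())
        decoder = json.JSONDecoder()
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("RPC socket closed before a full response arrived")
            buf += chunk
            try:
                response, _ = decoder.raw_decode(buf.decode())
                return response
            except ValueError:
                continue  # JSON object not complete yet; keep reading

    def add_ns_with_nsid_1(sock, req_id):
        # Namespace parameters are nested under "namespace"; pinning nsid=1 makes the
        # second call collide with the namespace created by the first call.
        params = {
            "nqn": "nqn.2016-06.io.spdk:cnode1",                 # assumed subsystem NQN
            "namespace": {"bdev_name": "Malloc0", "nsid": 1},    # assumed bdev name
        }
        return rpc_call(sock, "nvmf_subsystem_add_ns", params, req_id)

    if __name__ == "__main__":
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(SOCK_PATH)
            print(add_ns_with_nsid_1(sock, 1))  # first add: should return the assigned NSID
            print(add_ns_with_nsid_1(sock, 2))  # second add: should fail; the target logs the
                                                # "Requested NSID 1 already in use" pair shown above

Run against a live target, the first call succeeds and the second is expected to come back as a JSON-RPC error while the target emits the same subsystem.c / nvmf_rpc.c error pair seen in this log.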
00:09:26.034 [... the error pair repeats again from 11:04:23.525 to 11:04:23.552 ...]
00:09:26.034 16734.50 IOPS, 130.74 MiB/s
00:09:26.294 [... further repeats of the same subsystem.c:2128 / nvmf_rpc.c:1517 pair from 11:04:23.560 to 11:04:24.548 ...]
00:09:27.075 [2024-10-06 11:04:24.558289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:27.075 [2024-10-06 11:04:24.558306]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.075 16758.20 IOPS, 130.92 MiB/s [2024-10-06 11:04:24.564764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.075 [2024-10-06 11:04:24.564781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.075 00:09:27.075 Latency(us) 00:09:27.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.076 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:27.076 Nvme1n1 : 5.01 16760.72 130.94 0.00 0.00 7630.38 2839.89 17226.61 00:09:27.076 =================================================================================================================== 00:09:27.076 Total : 16760.72 130.94 0.00 0.00 7630.38 2839.89 17226.61 00:09:27.076 [2024-10-06 11:04:24.572781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.076 [2024-10-06 11:04:24.572795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.076 [2024-10-06 11:04:24.580800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.076 [2024-10-06 11:04:24.580812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.076 [2024-10-06 11:04:24.588827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.076 [2024-10-06 11:04:24.588840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.076 [2024-10-06 11:04:24.596847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.076 [2024-10-06 11:04:24.596863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.076 [2024-10-06 11:04:24.604861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.076 [2024-10-06 11:04:24.604871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.076 [2024-10-06 11:04:24.612885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.076 [2024-10-06 11:04:24.612896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.076 [2024-10-06 11:04:24.620904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.076 [2024-10-06 11:04:24.620914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.076 [2024-10-06 11:04:24.628923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.076 [2024-10-06 11:04:24.628934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.076 [2024-10-06 11:04:24.636944] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.076 [2024-10-06 11:04:24.636956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.076 [2024-10-06 11:04:24.644965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.076 [2024-10-06 11:04:24.644977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.335 [2024-10-06 11:04:24.652986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.335 [2024-10-06 11:04:24.652997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.335 [2024-10-06 
11:04:24.661005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.335 [2024-10-06 11:04:24.661016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.335 [2024-10-06 11:04:24.669025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.335 [2024-10-06 11:04:24.669035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.335 [2024-10-06 11:04:24.677047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.335 [2024-10-06 11:04:24.677056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.335 [2024-10-06 11:04:24.685078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.335 [2024-10-06 11:04:24.685088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.335 [2024-10-06 11:04:24.693114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.335 [2024-10-06 11:04:24.693125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.335 [2024-10-06 11:04:24.701132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.335 [2024-10-06 11:04:24.701142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.335 [2024-10-06 11:04:24.709137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.335 [2024-10-06 11:04:24.709146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.335 [2024-10-06 11:04:24.717157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.335 [2024-10-06 11:04:24.717168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.335 [2024-10-06 11:04:24.725181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.335 [2024-10-06 11:04:24.725190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.335 [2024-10-06 11:04:24.733198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.335 [2024-10-06 11:04:24.733207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1915433) - No such process 00:09:27.335 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1915433 00:09:27.335 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.335 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.335 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.335 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.335 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:27.335 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.335 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.335 delay0 00:09:27.335 11:04:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.335 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:27.335 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.335 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.335 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.335 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:27.335 [2024-10-06 11:04:24.901238] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:33.906 Initializing NVMe Controllers 00:09:33.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:33.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:33.906 Initialization complete. Launching workers. 00:09:33.906 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 74 00:09:33.906 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 361, failed to submit 33 00:09:33.906 success 161, unsuccessful 200, failed 0 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:33.906 rmmod nvme_tcp 00:09:33.906 rmmod nvme_fabrics 00:09:33.906 rmmod nvme_keyring 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1913625 ']' 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1913625 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1913625 ']' 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1913625 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 1913625 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1913625' 00:09:33.906 killing process with pid 1913625 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1913625 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1913625 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.906 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:36.444 00:09:36.444 real 0m30.885s 00:09:36.444 user 0m41.936s 00:09:36.444 sys 0m11.017s 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.444 ************************************ 00:09:36.444 END TEST nvmf_zcopy 00:09:36.444 ************************************ 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.444 ************************************ 00:09:36.444 START TEST nvmf_nmic 00:09:36.444 ************************************ 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:36.444 * Looking for test storage... 
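For reference, the zcopy scenario that just finished boils down to a short sequence, reconstructed here from the xtrace lines above. Using rpc.py directly in place of the harness's rpc_cmd wrapper is an assumption; the NQN, bdev names and abort arguments are the ones from this run.

  # Replace the plain malloc namespace with a delay bdev so queued I/O stays outstanding long enough to abort.
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # injected latencies, in microseconds
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Queue random I/O over TCP and abort it while it is held up behind delay0.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

With each I/O delayed on the order of a second, most queued commands are still outstanding when the aborts arrive, which is what the submitted/success/unsuccessful counts in the abort summary above reflect.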
00:09:36.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:36.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.444 --rc genhtml_branch_coverage=1 00:09:36.444 --rc genhtml_function_coverage=1 00:09:36.444 --rc genhtml_legend=1 00:09:36.444 --rc geninfo_all_blocks=1 00:09:36.444 --rc geninfo_unexecuted_blocks=1 00:09:36.444 00:09:36.444 ' 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:36.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.444 --rc genhtml_branch_coverage=1 00:09:36.444 --rc genhtml_function_coverage=1 00:09:36.444 --rc genhtml_legend=1 00:09:36.444 --rc geninfo_all_blocks=1 00:09:36.444 --rc geninfo_unexecuted_blocks=1 00:09:36.444 00:09:36.444 ' 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:36.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.444 --rc genhtml_branch_coverage=1 00:09:36.444 --rc genhtml_function_coverage=1 00:09:36.444 --rc genhtml_legend=1 00:09:36.444 --rc geninfo_all_blocks=1 00:09:36.444 --rc geninfo_unexecuted_blocks=1 00:09:36.444 00:09:36.444 ' 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:36.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.444 --rc genhtml_branch_coverage=1 00:09:36.444 --rc genhtml_function_coverage=1 00:09:36.444 --rc genhtml_legend=1 00:09:36.444 --rc geninfo_all_blocks=1 00:09:36.444 --rc geninfo_unexecuted_blocks=1 00:09:36.444 00:09:36.444 ' 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
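The xtrace above walks through the lcov version check in scripts/common.sh; condensed into plain shell, the comparison it performs amounts to roughly the following (reconstructed from the traced statements, not copied from the repository). Its result decides which --rc coverage options get exported just above.

  # lt 1.15 2: split both versions on '.', '-' and ':' and compare field by field, numerically.
  ver1=1.15 ver2=2
  IFS=.-: read -ra v1 <<< "$ver1"
  IFS=.-: read -ra v2 <<< "$ver2"
  for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
      ((${v1[i]:-0} > ${v2[i]:-0})) && { echo 'not lower'; break; }
      ((${v1[i]:-0} < ${v2[i]:-0})) && { echo 'lower'; break; }   # 1 < 2, so lcov 1.15 counts as older than 2
  done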
00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.444 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:36.445 
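The host-side variables set here (ports, generated host NQN and host ID) are consumed later when the initiator connects; assembled from the values visible in this log, the connect call made further down amounts to the sketch below. nqn.2016-06.io.spdk:cnode1 is the subsystem that nmic.sh creates later in this log, it does not exist yet at this point.

  NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562   # output of 'nvme gen-hostnqn' above
  NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # NVMF_PORT=4420; nmic.sh later adds a second path on 4421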
11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:36.445 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:41.768 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.768 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:41.768 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:41.768 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:41.768 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:41.768 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:41.768 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:41.768 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:41.768 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:41.769 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:41.769 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:41.769 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:41.769 11:04:39 
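The loop that starts here resolves each supported PCI function to the kernel net device that sits under it in sysfs, as the next few traced lines show. A minimal sketch of that mapping, with the interface names seen in this run (the operstate read is an assumption standing in for the harness's own interface-state check):

  for pci in 0000:af:00.0 0000:af:00.1; do
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
          dev=${path##*/}                                  # cvl_0_0 and cvl_0_1 on this machine
          state=$(cat "$path/operstate" 2>/dev/null)       # assumed stand-in for the harness's up-check
          [[ $state == up ]] && echo "Found net devices under $pci: $dev"
      done
  done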
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:41.769 Found net devices under 0000:af:00.0: cvl_0_0 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:41.769 Found net devices under 0000:af:00.1: cvl_0_1 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:41.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:41.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:09:41.769 00:09:41.769 --- 10.0.0.2 ping statistics --- 00:09:41.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.769 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:41.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:09:41.769 00:09:41.769 --- 10.0.0.1 ping statistics --- 00:09:41.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.769 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1920798 00:09:41.769 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1920798 00:09:41.770 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:41.770 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1920798 ']' 00:09:41.770 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.770 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.770 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.770 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.770 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.042 [2024-10-06 11:04:39.343955] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
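The nvmfappstart step traced here launches the target inside the network namespace created above and waits for its RPC socket before any configuration is sent. Condensed, with the paths, core mask and namespace name from this run:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # waitforlisten (the harness helper) then polls until the app answers on /var/tmp/spdk.sock,
  # after which the rpc_cmd calls below (nvmf_create_transport, bdev_malloc_create,
  # nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) are issued.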
00:09:42.042 [2024-10-06 11:04:39.344000] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.042 [2024-10-06 11:04:39.406200] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.042 [2024-10-06 11:04:39.448264] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.042 [2024-10-06 11:04:39.448303] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.042 [2024-10-06 11:04:39.448311] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.042 [2024-10-06 11:04:39.448317] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.042 [2024-10-06 11:04:39.448322] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.042 [2024-10-06 11:04:39.449871] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.042 [2024-10-06 11:04:39.449892] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.042 [2024-10-06 11:04:39.449959] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.042 [2024-10-06 11:04:39.449961] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.042 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:42.042 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:42.042 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:42.042 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:42.042 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.042 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.042 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:42.042 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.042 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.042 [2024-10-06 11:04:39.595032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.042 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.042 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:42.042 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.042 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.302 Malloc0 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.302 [2024-10-06 11:04:39.646330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:42.302 test case1: single bdev can't be used in multiple subsystems 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.302 [2024-10-06 11:04:39.674210] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:42.302 [2024-10-06 11:04:39.674230] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:42.302 [2024-10-06 11:04:39.674237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.302 request: 00:09:42.302 { 00:09:42.302 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:42.302 "namespace": { 00:09:42.302 "bdev_name": "Malloc0", 00:09:42.302 "no_auto_visible": false 
00:09:42.302 }, 00:09:42.302 "method": "nvmf_subsystem_add_ns", 00:09:42.302 "req_id": 1 00:09:42.302 } 00:09:42.302 Got JSON-RPC error response 00:09:42.302 response: 00:09:42.302 { 00:09:42.302 "code": -32602, 00:09:42.302 "message": "Invalid parameters" 00:09:42.302 } 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:42.302 Adding namespace failed - expected result. 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:42.302 test case2: host connect to nvmf target in multiple paths 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.302 [2024-10-06 11:04:39.686347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.302 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:43.681 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:44.619 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:44.619 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:44.619 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:44.619 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:44.619 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:46.524 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:46.524 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:46.524 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:46.524 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:46.794 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:46.794 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:46.794 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:46.794 [global] 00:09:46.794 thread=1 00:09:46.794 invalidate=1 00:09:46.794 rw=write 00:09:46.794 time_based=1 00:09:46.794 runtime=1 00:09:46.794 ioengine=libaio 00:09:46.794 direct=1 00:09:46.794 bs=4096 00:09:46.794 iodepth=1 00:09:46.794 norandommap=0 00:09:46.794 numjobs=1 00:09:46.794 00:09:46.794 verify_dump=1 00:09:46.794 verify_backlog=512 00:09:46.794 verify_state_save=0 00:09:46.794 do_verify=1 00:09:46.794 verify=crc32c-intel 00:09:46.794 [job0] 00:09:46.794 filename=/dev/nvme0n1 00:09:46.794 Could not set queue depth (nvme0n1) 00:09:47.052 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.052 fio-3.35 00:09:47.052 Starting 1 thread 00:09:48.427 00:09:48.427 job0: (groupid=0, jobs=1): err= 0: pid=1921752: Sun Oct 6 11:04:45 2024 00:09:48.427 read: IOPS=21, BW=84.6KiB/s (86.6kB/s)(88.0KiB/1040msec) 00:09:48.427 slat (nsec): min=9417, max=26970, avg=22622.91, stdev=3210.64 00:09:48.427 clat (usec): min=40710, max=42003, avg=41089.00, stdev=369.35 00:09:48.427 lat (usec): min=40719, max=42027, avg=41111.63, stdev=369.93 00:09:48.427 clat percentiles (usec): 00:09:48.427 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:48.427 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:48.427 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:48.427 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:48.427 | 99.99th=[42206] 00:09:48.427 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:09:48.427 slat (usec): min=9, max=27529, avg=64.77, stdev=1216.17 00:09:48.427 clat (usec): min=156, max=426, avg=196.00, stdev=33.97 00:09:48.427 lat (usec): min=166, max=27917, avg=260.77, stdev=1225.14 00:09:48.427 clat percentiles (usec): 00:09:48.427 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 167], 00:09:48.427 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 182], 60.00th=[ 210], 00:09:48.427 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 239], 95.00th=[ 243], 00:09:48.427 | 99.00th=[ 265], 99.50th=[ 388], 99.90th=[ 429], 99.95th=[ 429], 00:09:48.427 | 99.99th=[ 429] 00:09:48.427 bw ( KiB/s): min= 4087, max= 4087, per=100.00%, avg=4087.00, stdev= 0.00, samples=1 00:09:48.427 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:48.427 lat (usec) : 250=93.45%, 500=2.43% 00:09:48.427 lat (msec) : 50=4.12% 00:09:48.427 cpu : usr=0.19%, sys=0.58%, ctx=537, majf=0, minf=1 00:09:48.427 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:48.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.427 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.427 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:48.427 00:09:48.427 Run status group 0 (all jobs): 00:09:48.427 READ: bw=84.6KiB/s (86.6kB/s), 84.6KiB/s-84.6KiB/s (86.6kB/s-86.6kB/s), io=88.0KiB (90.1kB), run=1040-1040msec 00:09:48.427 WRITE: bw=1969KiB/s (2016kB/s), 1969KiB/s-1969KiB/s (2016kB/s-2016kB/s), io=2048KiB (2097kB), run=1040-1040msec 00:09:48.427 00:09:48.427 Disk stats (read/write): 00:09:48.427 nvme0n1: ios=44/512, merge=0/0, ticks=1730/98, in_queue=1828, util=98.50% 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:48.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:48.427 rmmod nvme_tcp 00:09:48.427 rmmod nvme_fabrics 00:09:48.427 rmmod nvme_keyring 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1920798 ']' 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1920798 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1920798 ']' 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1920798 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1920798 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1920798' 00:09:48.427 killing process with pid 1920798 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1920798 00:09:48.427 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@974 -- # wait 1920798 00:09:48.685 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:48.685 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:48.685 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:48.685 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:48.685 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:48.685 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:48.685 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:48.685 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:48.685 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:48.685 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.685 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.685 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:51.220 00:09:51.220 real 0m14.646s 00:09:51.220 user 0m33.350s 00:09:51.220 sys 0m4.850s 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.220 ************************************ 00:09:51.220 END TEST nvmf_nmic 00:09:51.220 ************************************ 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:51.220 ************************************ 00:09:51.220 START TEST nvmf_fio_target 00:09:51.220 ************************************ 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:51.220 * Looking for test storage... 
00:09:51.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:51.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.220 --rc genhtml_branch_coverage=1 00:09:51.220 --rc genhtml_function_coverage=1 00:09:51.220 --rc genhtml_legend=1 00:09:51.220 --rc geninfo_all_blocks=1 00:09:51.220 --rc geninfo_unexecuted_blocks=1 00:09:51.220 00:09:51.220 ' 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:51.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.220 --rc genhtml_branch_coverage=1 00:09:51.220 --rc genhtml_function_coverage=1 00:09:51.220 --rc genhtml_legend=1 00:09:51.220 --rc geninfo_all_blocks=1 00:09:51.220 --rc geninfo_unexecuted_blocks=1 00:09:51.220 00:09:51.220 ' 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:51.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.220 --rc genhtml_branch_coverage=1 00:09:51.220 --rc genhtml_function_coverage=1 00:09:51.220 --rc genhtml_legend=1 00:09:51.220 --rc geninfo_all_blocks=1 00:09:51.220 --rc geninfo_unexecuted_blocks=1 00:09:51.220 00:09:51.220 ' 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:51.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.220 --rc genhtml_branch_coverage=1 00:09:51.220 --rc genhtml_function_coverage=1 00:09:51.220 --rc genhtml_legend=1 00:09:51.220 --rc geninfo_all_blocks=1 00:09:51.220 --rc geninfo_unexecuted_blocks=1 00:09:51.220 00:09:51.220 ' 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:51.220 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:51.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:51.221 11:04:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:51.221 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.495 11:04:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:56.495 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:56.495 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.495 11:04:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:56.495 Found net devices under 0000:af:00.0: cvl_0_0 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:56.495 Found net devices under 0000:af:00.1: cvl_0_1 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:56.495 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:56.496 11:04:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:56.496 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:56.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:09:56.496 00:09:56.496 --- 10.0.0.2 ping statistics --- 00:09:56.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.496 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:56.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:09:56.496 00:09:56.496 --- 10.0.0.1 ping statistics --- 00:09:56.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.496 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1925454 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1925454 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1925454 ']' 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:56.496 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.754 [2024-10-06 11:04:54.107724] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:09:56.754 [2024-10-06 11:04:54.107772] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.754 [2024-10-06 11:04:54.165886] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.754 [2024-10-06 11:04:54.205914] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.754 [2024-10-06 11:04:54.205957] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.754 [2024-10-06 11:04:54.205964] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.754 [2024-10-06 11:04:54.205969] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.754 [2024-10-06 11:04:54.205974] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.754 [2024-10-06 11:04:54.207497] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.754 [2024-10-06 11:04:54.207596] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.754 [2024-10-06 11:04:54.207688] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.754 [2024-10-06 11:04:54.207689] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.754 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:56.754 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:56.754 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:56.754 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:56.754 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.012 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.012 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:57.012 [2024-10-06 11:04:54.529657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.012 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:57.271 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:57.271 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:57.529 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:57.529 11:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:57.787 11:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:57.787 11:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:58.045 11:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:58.045 11:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:58.045 11:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:58.303 11:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:58.303 11:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:58.561 11:04:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:58.561 11:04:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:58.818 11:04:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:58.818 11:04:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:59.075 11:04:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:59.075 11:04:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:59.075 11:04:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:59.334 11:04:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:59.334 11:04:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:59.592 11:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.850 [2024-10-06 11:04:57.224144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.850 11:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:00.107 11:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:00.107 11:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:01.477 11:04:58 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:01.477 11:04:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:01.477 11:04:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:01.477 11:04:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:01.477 11:04:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:01.477 11:04:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:03.376 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:03.376 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:03.376 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:03.376 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:03.376 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:03.376 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:03.376 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:03.376 [global] 00:10:03.376 thread=1 00:10:03.376 invalidate=1 00:10:03.376 rw=write 00:10:03.376 time_based=1 00:10:03.376 runtime=1 00:10:03.376 ioengine=libaio 00:10:03.376 direct=1 00:10:03.376 bs=4096 00:10:03.376 iodepth=1 00:10:03.376 norandommap=0 00:10:03.376 numjobs=1 00:10:03.376 00:10:03.376 verify_dump=1 00:10:03.376 verify_backlog=512 00:10:03.376 verify_state_save=0 00:10:03.376 do_verify=1 00:10:03.376 verify=crc32c-intel 00:10:03.376 [job0] 00:10:03.376 filename=/dev/nvme0n1 00:10:03.376 [job1] 00:10:03.376 filename=/dev/nvme0n2 00:10:03.376 [job2] 00:10:03.376 filename=/dev/nvme0n3 00:10:03.376 [job3] 00:10:03.376 filename=/dev/nvme0n4 00:10:03.376 Could not set queue depth (nvme0n1) 00:10:03.376 Could not set queue depth (nvme0n2) 00:10:03.376 Could not set queue depth (nvme0n3) 00:10:03.376 Could not set queue depth (nvme0n4) 00:10:03.634 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.634 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.634 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.634 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.634 fio-3.35 00:10:03.634 Starting 4 threads 00:10:05.008 00:10:05.008 job0: (groupid=0, jobs=1): err= 0: pid=1926851: Sun Oct 6 11:05:02 2024 00:10:05.008 read: IOPS=1791, BW=7165KiB/s (7337kB/s)(7172KiB/1001msec) 00:10:05.008 slat (nsec): min=7131, max=44166, avg=8130.37, stdev=1912.35 00:10:05.008 clat (usec): min=236, max=454, avg=317.27, stdev=23.71 00:10:05.008 lat (usec): min=243, max=463, avg=325.41, stdev=23.83 00:10:05.008 clat percentiles (usec): 00:10:05.008 | 1.00th=[ 253], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 306], 
00:10:05.008 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 00:10:05.008 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 338], 95.00th=[ 347], 00:10:05.008 | 99.00th=[ 420], 99.50th=[ 437], 99.90th=[ 457], 99.95th=[ 457], 00:10:05.008 | 99.99th=[ 457] 00:10:05.008 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:05.008 slat (nsec): min=10642, max=39317, avg=11907.96, stdev=1667.98 00:10:05.008 clat (usec): min=145, max=400, avg=184.32, stdev=18.81 00:10:05.008 lat (usec): min=157, max=413, avg=196.23, stdev=19.12 00:10:05.008 clat percentiles (usec): 00:10:05.008 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 169], 00:10:05.008 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:10:05.008 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 208], 95.00th=[ 217], 00:10:05.008 | 99.00th=[ 245], 99.50th=[ 269], 99.90th=[ 293], 99.95th=[ 338], 00:10:05.008 | 99.99th=[ 400] 00:10:05.008 bw ( KiB/s): min= 8192, max= 8192, per=56.26%, avg=8192.00, stdev= 0.00, samples=1 00:10:05.008 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:05.008 lat (usec) : 250=53.19%, 500=46.81% 00:10:05.008 cpu : usr=3.00%, sys=6.50%, ctx=3843, majf=0, minf=1 00:10:05.008 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.008 issued rwts: total=1793,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.008 job1: (groupid=0, jobs=1): err= 0: pid=1926868: Sun Oct 6 11:05:02 2024 00:10:05.008 read: IOPS=22, BW=89.2KiB/s (91.4kB/s)(92.0KiB/1031msec) 00:10:05.008 slat (nsec): min=9644, max=27245, avg=22051.48, stdev=3981.99 00:10:05.008 clat (usec): min=350, max=42043, avg=39830.41, stdev=8618.25 00:10:05.008 lat (usec): min=373, max=42066, avg=39852.46, stdev=8618.07 00:10:05.008 clat percentiles (usec): 00:10:05.008 | 1.00th=[ 351], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:05.008 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:05.008 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:05.008 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:05.008 | 99.99th=[42206] 00:10:05.008 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:10:05.008 slat (nsec): min=9769, max=37715, avg=10777.00, stdev=1584.91 00:10:05.008 clat (usec): min=153, max=370, avg=203.60, stdev=23.91 00:10:05.008 lat (usec): min=166, max=407, avg=214.38, stdev=24.32 00:10:05.008 clat percentiles (usec): 00:10:05.008 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:10:05.008 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 206], 00:10:05.008 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 241], 00:10:05.008 | 99.00th=[ 302], 99.50th=[ 306], 99.90th=[ 371], 99.95th=[ 371], 00:10:05.008 | 99.99th=[ 371] 00:10:05.008 bw ( KiB/s): min= 4096, max= 4096, per=28.13%, avg=4096.00, stdev= 0.00, samples=1 00:10:05.008 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:05.008 lat (usec) : 250=92.71%, 500=3.18% 00:10:05.008 lat (msec) : 50=4.11% 00:10:05.008 cpu : usr=0.19%, sys=0.58%, ctx=536, majf=0, minf=1 00:10:05.008 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.008 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.008 job2: (groupid=0, jobs=1): err= 0: pid=1926880: Sun Oct 6 11:05:02 2024 00:10:05.008 read: IOPS=22, BW=89.8KiB/s (92.0kB/s)(92.0KiB/1024msec) 00:10:05.008 slat (nsec): min=9945, max=29001, avg=22505.48, stdev=3876.87 00:10:05.008 clat (usec): min=373, max=42006, avg=39297.25, stdev=8489.81 00:10:05.008 lat (usec): min=396, max=42028, avg=39319.76, stdev=8489.55 00:10:05.008 clat percentiles (usec): 00:10:05.008 | 1.00th=[ 375], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:05.008 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:05.008 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:10:05.008 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:05.008 | 99.99th=[42206] 00:10:05.008 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:10:05.008 slat (nsec): min=4716, max=45206, avg=11555.39, stdev=4145.30 00:10:05.008 clat (usec): min=170, max=408, avg=212.66, stdev=20.19 00:10:05.008 lat (usec): min=178, max=420, avg=224.22, stdev=21.85 00:10:05.008 clat percentiles (usec): 00:10:05.008 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 198], 00:10:05.008 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 217], 00:10:05.008 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 235], 95.00th=[ 241], 00:10:05.008 | 99.00th=[ 260], 99.50th=[ 265], 99.90th=[ 408], 99.95th=[ 408], 00:10:05.008 | 99.99th=[ 408] 00:10:05.008 bw ( KiB/s): min= 4096, max= 4096, per=28.13%, avg=4096.00, stdev= 0.00, samples=1 00:10:05.008 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:05.008 lat (usec) : 250=93.64%, 500=2.24% 00:10:05.008 lat (msec) : 50=4.11% 00:10:05.008 cpu : usr=0.10%, sys=1.08%, ctx=538, majf=0, minf=1 00:10:05.008 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.008 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.008 job3: (groupid=0, jobs=1): err= 0: pid=1926890: Sun Oct 6 11:05:02 2024 00:10:05.008 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:05.008 slat (nsec): min=6902, max=26935, avg=8244.28, stdev=2824.07 00:10:05.008 clat (usec): min=312, max=42126, avg=1635.01, stdev=7186.08 00:10:05.008 lat (usec): min=320, max=42135, avg=1643.25, stdev=7188.51 00:10:05.008 clat percentiles (usec): 00:10:05.008 | 1.00th=[ 322], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 334], 00:10:05.008 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 351], 00:10:05.008 | 70.00th=[ 355], 80.00th=[ 359], 90.00th=[ 363], 95.00th=[ 375], 00:10:05.008 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:05.008 | 99.99th=[42206] 00:10:05.008 write: IOPS=680, BW=2721KiB/s (2787kB/s)(2724KiB/1001msec); 0 zone resets 00:10:05.008 slat (usec): min=9, max=124, avg=15.51, stdev=13.15 00:10:05.008 clat (usec): min=125, max=381, avg=212.60, stdev=24.29 00:10:05.008 lat (usec): min=174, max=406, avg=228.11, stdev=29.68 00:10:05.008 clat percentiles (usec): 00:10:05.008 | 
1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 194], 00:10:05.008 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:10:05.008 | 70.00th=[ 221], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 253], 00:10:05.008 | 99.00th=[ 302], 99.50th=[ 302], 99.90th=[ 383], 99.95th=[ 383], 00:10:05.008 | 99.99th=[ 383] 00:10:05.008 bw ( KiB/s): min= 4096, max= 4096, per=28.13%, avg=4096.00, stdev= 0.00, samples=1 00:10:05.008 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:05.008 lat (usec) : 250=54.07%, 500=44.59% 00:10:05.008 lat (msec) : 50=1.34% 00:10:05.008 cpu : usr=0.20%, sys=2.00%, ctx=1194, majf=0, minf=2 00:10:05.008 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.008 issued rwts: total=512,681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.008 00:10:05.008 Run status group 0 (all jobs): 00:10:05.008 READ: bw=9121KiB/s (9340kB/s), 89.2KiB/s-7165KiB/s (91.4kB/s-7337kB/s), io=9404KiB (9630kB), run=1001-1031msec 00:10:05.008 WRITE: bw=14.2MiB/s (14.9MB/s), 1986KiB/s-8184KiB/s (2034kB/s-8380kB/s), io=14.7MiB (15.4MB), run=1001-1031msec 00:10:05.008 00:10:05.008 Disk stats (read/write): 00:10:05.008 nvme0n1: ios=1560/1709, merge=0/0, ticks=1328/299, in_queue=1627, util=85.07% 00:10:05.008 nvme0n2: ios=40/512, merge=0/0, ticks=1578/102, in_queue=1680, util=89.01% 00:10:05.008 nvme0n3: ios=75/512, merge=0/0, ticks=1564/105, in_queue=1669, util=93.00% 00:10:05.008 nvme0n4: ios=247/512, merge=0/0, ticks=814/99, in_queue=913, util=95.79% 00:10:05.008 11:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:05.008 [global] 00:10:05.008 thread=1 00:10:05.008 invalidate=1 00:10:05.008 rw=randwrite 00:10:05.008 time_based=1 00:10:05.008 runtime=1 00:10:05.008 ioengine=libaio 00:10:05.008 direct=1 00:10:05.008 bs=4096 00:10:05.008 iodepth=1 00:10:05.008 norandommap=0 00:10:05.008 numjobs=1 00:10:05.008 00:10:05.008 verify_dump=1 00:10:05.008 verify_backlog=512 00:10:05.008 verify_state_save=0 00:10:05.008 do_verify=1 00:10:05.008 verify=crc32c-intel 00:10:05.008 [job0] 00:10:05.008 filename=/dev/nvme0n1 00:10:05.008 [job1] 00:10:05.008 filename=/dev/nvme0n2 00:10:05.008 [job2] 00:10:05.008 filename=/dev/nvme0n3 00:10:05.008 [job3] 00:10:05.008 filename=/dev/nvme0n4 00:10:05.008 Could not set queue depth (nvme0n1) 00:10:05.008 Could not set queue depth (nvme0n2) 00:10:05.008 Could not set queue depth (nvme0n3) 00:10:05.008 Could not set queue depth (nvme0n4) 00:10:05.266 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.266 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.266 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.266 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.266 fio-3.35 00:10:05.266 Starting 4 threads 00:10:06.660 00:10:06.660 job0: (groupid=0, jobs=1): err= 0: pid=1927339: Sun Oct 6 11:05:03 2024 00:10:06.660 read: IOPS=1877, BW=7508KiB/s (7689kB/s)(7516KiB/1001msec) 
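The job description that fio-wrapper dumps above maps one-to-one onto ordinary fio job-file options: the wrapper's -i 4096 shows up as bs=4096, -d 1 as iodepth=1, -t randwrite as rw=randwrite, -r 1 as runtime=1 with time_based, and -v as the crc32c-intel verify settings. A minimal sketch of a hand-runnable equivalent, copying the dumped parameters into a standalone job file (the file name is made up here, and it assumes /dev/nvme0n1 is still connected to the target from earlier in this run):

    # randwrite-qd1.fio -- parameters copied from the dump above
    [global]
    thread=1
    invalidate=1
    rw=randwrite
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1

    # run it with:  fio randwrite-qd1.fio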
00:10:06.660 slat (nsec): min=6687, max=34229, avg=8284.52, stdev=2536.88 00:10:06.660 clat (usec): min=184, max=574, avg=302.36, stdev=47.30 00:10:06.660 lat (usec): min=191, max=582, avg=310.65, stdev=47.37 00:10:06.660 clat percentiles (usec): 00:10:06.660 | 1.00th=[ 215], 5.00th=[ 233], 10.00th=[ 243], 20.00th=[ 260], 00:10:06.660 | 30.00th=[ 277], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 314], 00:10:06.660 | 70.00th=[ 322], 80.00th=[ 326], 90.00th=[ 338], 95.00th=[ 371], 00:10:06.660 | 99.00th=[ 469], 99.50th=[ 486], 99.90th=[ 515], 99.95th=[ 578], 00:10:06.660 | 99.99th=[ 578] 00:10:06.660 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:06.660 slat (nsec): min=9082, max=39604, avg=10506.71, stdev=1647.69 00:10:06.660 clat (usec): min=134, max=309, avg=187.43, stdev=26.81 00:10:06.660 lat (usec): min=145, max=332, avg=197.93, stdev=27.12 00:10:06.660 clat percentiles (usec): 00:10:06.660 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:10:06.660 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 188], 00:10:06.660 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 237], 00:10:06.660 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 297], 99.95th=[ 306], 00:10:06.660 | 99.99th=[ 310] 00:10:06.661 bw ( KiB/s): min= 8192, max= 8192, per=42.41%, avg=8192.00, stdev= 0.00, samples=1 00:10:06.661 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:06.661 lat (usec) : 250=58.06%, 500=41.81%, 750=0.13% 00:10:06.661 cpu : usr=2.30%, sys=3.80%, ctx=3927, majf=0, minf=1 00:10:06.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.661 issued rwts: total=1879,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.661 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.661 job1: (groupid=0, jobs=1): err= 0: pid=1927353: Sun Oct 6 11:05:03 2024 00:10:06.661 read: IOPS=20, BW=83.7KiB/s (85.8kB/s)(84.0KiB/1003msec) 00:10:06.661 slat (nsec): min=9362, max=25977, avg=23965.14, stdev=3426.30 00:10:06.661 clat (usec): min=40881, max=42029, avg=41352.17, stdev=489.42 00:10:06.661 lat (usec): min=40905, max=42055, avg=41376.14, stdev=489.98 00:10:06.661 clat percentiles (usec): 00:10:06.661 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:06.661 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:06.661 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:06.661 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:06.661 | 99.99th=[42206] 00:10:06.661 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:10:06.661 slat (nsec): min=9609, max=49080, avg=11610.95, stdev=2425.46 00:10:06.661 clat (usec): min=170, max=368, avg=247.15, stdev=42.87 00:10:06.661 lat (usec): min=182, max=417, avg=258.76, stdev=43.04 00:10:06.661 clat percentiles (usec): 00:10:06.661 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 196], 20.00th=[ 210], 00:10:06.661 | 30.00th=[ 219], 40.00th=[ 229], 50.00th=[ 239], 60.00th=[ 251], 00:10:06.661 | 70.00th=[ 273], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 322], 00:10:06.661 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 367], 99.95th=[ 367], 00:10:06.661 | 99.99th=[ 367] 00:10:06.661 bw ( KiB/s): min= 4096, max= 4096, per=21.20%, avg=4096.00, stdev= 0.00, samples=1 00:10:06.661 iops : min= 
1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:06.661 lat (usec) : 250=56.66%, 500=39.40% 00:10:06.661 lat (msec) : 50=3.94% 00:10:06.661 cpu : usr=0.20%, sys=1.20%, ctx=533, majf=0, minf=1 00:10:06.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.661 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.661 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.661 job2: (groupid=0, jobs=1): err= 0: pid=1927354: Sun Oct 6 11:05:03 2024 00:10:06.661 read: IOPS=321, BW=1287KiB/s (1318kB/s)(1288KiB/1001msec) 00:10:06.661 slat (nsec): min=6993, max=28381, avg=8906.59, stdev=4104.90 00:10:06.661 clat (usec): min=270, max=42013, avg=2746.65, stdev=9683.68 00:10:06.661 lat (usec): min=278, max=42036, avg=2755.55, stdev=9687.21 00:10:06.661 clat percentiles (usec): 00:10:06.661 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 302], 00:10:06.661 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 330], 00:10:06.661 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 383], 95.00th=[41157], 00:10:06.661 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:06.661 | 99.99th=[42206] 00:10:06.661 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:06.661 slat (nsec): min=9604, max=64286, avg=11493.22, stdev=5888.25 00:10:06.661 clat (usec): min=178, max=434, avg=204.37, stdev=20.01 00:10:06.661 lat (usec): min=188, max=474, avg=215.86, stdev=23.15 00:10:06.661 clat percentiles (usec): 00:10:06.661 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:10:06.661 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 204], 00:10:06.661 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 219], 95.00th=[ 227], 00:10:06.661 | 99.00th=[ 260], 99.50th=[ 338], 99.90th=[ 433], 99.95th=[ 433], 00:10:06.661 | 99.99th=[ 433] 00:10:06.661 bw ( KiB/s): min= 4096, max= 4096, per=21.20%, avg=4096.00, stdev= 0.00, samples=1 00:10:06.661 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:06.661 lat (usec) : 250=60.67%, 500=36.81%, 750=0.12%, 1000=0.12% 00:10:06.661 lat (msec) : 50=2.28% 00:10:06.661 cpu : usr=0.30%, sys=0.90%, ctx=836, majf=0, minf=1 00:10:06.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.661 issued rwts: total=322,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.661 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.661 job3: (groupid=0, jobs=1): err= 0: pid=1927356: Sun Oct 6 11:05:03 2024 00:10:06.661 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:06.661 slat (nsec): min=6736, max=25125, avg=7687.46, stdev=1065.67 00:10:06.661 clat (usec): min=305, max=773, avg=373.49, stdev=37.94 00:10:06.661 lat (usec): min=313, max=781, avg=381.18, stdev=38.00 00:10:06.661 clat percentiles (usec): 00:10:06.661 | 1.00th=[ 326], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 351], 00:10:06.661 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 363], 60.00th=[ 371], 00:10:06.661 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 433], 95.00th=[ 449], 00:10:06.661 | 99.00th=[ 498], 99.50th=[ 562], 99.90th=[ 725], 99.95th=[ 775], 00:10:06.661 | 99.99th=[ 775] 
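Because bs=4096 means every completed I/O moves exactly 4 KiB, the bandwidth and IOPS columns in these queue-depth-1 results are two views of the same quantity (BW ≈ IOPS x 4 KiB). A quick worked check against job0 of this group, using the totals from its write line above:

    8192 KiB written / 1.001 s   ≈ 8184 KiB/s   (reported: BW=8184KiB/s)
    2048 writes issued / 1.001 s ≈ 2046 IOPS    (reported: IOPS=2045, truncated)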
00:10:06.661 write: IOPS=1770, BW=7081KiB/s (7251kB/s)(7088KiB/1001msec); 0 zone resets 00:10:06.661 slat (nsec): min=9426, max=37669, avg=10658.10, stdev=1248.51 00:10:06.661 clat (usec): min=164, max=515, avg=218.88, stdev=37.52 00:10:06.661 lat (usec): min=174, max=525, avg=229.54, stdev=37.66 00:10:06.661 clat percentiles (usec): 00:10:06.661 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:10:06.661 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 217], 00:10:06.661 | 70.00th=[ 227], 80.00th=[ 243], 90.00th=[ 289], 95.00th=[ 302], 00:10:06.661 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 429], 99.95th=[ 515], 00:10:06.661 | 99.99th=[ 515] 00:10:06.661 bw ( KiB/s): min= 8192, max= 8192, per=42.41%, avg=8192.00, stdev= 0.00, samples=1 00:10:06.661 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:06.661 lat (usec) : 250=45.07%, 500=54.44%, 750=0.45%, 1000=0.03% 00:10:06.661 cpu : usr=2.00%, sys=3.00%, ctx=3311, majf=0, minf=1 00:10:06.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.661 issued rwts: total=1536,1772,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.661 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.661 00:10:06.661 Run status group 0 (all jobs): 00:10:06.661 READ: bw=14.6MiB/s (15.3MB/s), 83.7KiB/s-7508KiB/s (85.8kB/s-7689kB/s), io=14.7MiB (15.4MB), run=1001-1003msec 00:10:06.661 WRITE: bw=18.9MiB/s (19.8MB/s), 2042KiB/s-8184KiB/s (2091kB/s-8380kB/s), io=18.9MiB (19.8MB), run=1001-1003msec 00:10:06.661 00:10:06.661 Disk stats (read/write): 00:10:06.661 nvme0n1: ios=1586/1909, merge=0/0, ticks=470/344, in_queue=814, util=87.17% 00:10:06.661 nvme0n2: ios=67/512, merge=0/0, ticks=772/125, in_queue=897, util=91.47% 00:10:06.661 nvme0n3: ios=59/512, merge=0/0, ticks=1589/100, in_queue=1689, util=96.57% 00:10:06.661 nvme0n4: ios=1364/1536, merge=0/0, ticks=1406/315, in_queue=1721, util=97.17% 00:10:06.661 11:05:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:06.661 [global] 00:10:06.661 thread=1 00:10:06.661 invalidate=1 00:10:06.661 rw=write 00:10:06.661 time_based=1 00:10:06.661 runtime=1 00:10:06.661 ioengine=libaio 00:10:06.661 direct=1 00:10:06.661 bs=4096 00:10:06.661 iodepth=128 00:10:06.661 norandommap=0 00:10:06.661 numjobs=1 00:10:06.661 00:10:06.661 verify_dump=1 00:10:06.661 verify_backlog=512 00:10:06.661 verify_state_save=0 00:10:06.661 do_verify=1 00:10:06.661 verify=crc32c-intel 00:10:06.661 [job0] 00:10:06.661 filename=/dev/nvme0n1 00:10:06.661 [job1] 00:10:06.661 filename=/dev/nvme0n2 00:10:06.661 [job2] 00:10:06.661 filename=/dev/nvme0n3 00:10:06.661 [job3] 00:10:06.661 filename=/dev/nvme0n4 00:10:06.661 Could not set queue depth (nvme0n1) 00:10:06.661 Could not set queue depth (nvme0n2) 00:10:06.661 Could not set queue depth (nvme0n3) 00:10:06.661 Could not set queue depth (nvme0n4) 00:10:07.043 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:07.043 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:07.043 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:07.043 job3: 
(g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:07.043 fio-3.35 00:10:07.043 Starting 4 threads 00:10:08.026 00:10:08.026 job0: (groupid=0, jobs=1): err= 0: pid=1927722: Sun Oct 6 11:05:05 2024 00:10:08.026 read: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec) 00:10:08.026 slat (nsec): min=1595, max=16561k, avg=159757.48, stdev=966018.31 00:10:08.026 clat (usec): min=8526, max=51405, avg=19099.18, stdev=8892.07 00:10:08.026 lat (usec): min=8533, max=51435, avg=19258.94, stdev=8951.99 00:10:08.026 clat percentiles (usec): 00:10:08.026 | 1.00th=[ 9765], 5.00th=[10421], 10.00th=[10683], 20.00th=[11863], 00:10:08.026 | 30.00th=[13566], 40.00th=[16319], 50.00th=[18220], 60.00th=[18744], 00:10:08.026 | 70.00th=[19006], 80.00th=[21365], 90.00th=[35390], 95.00th=[40109], 00:10:08.026 | 99.00th=[47449], 99.50th=[49546], 99.90th=[50070], 99.95th=[50594], 00:10:08.026 | 99.99th=[51643] 00:10:08.026 write: IOPS=2478, BW=9913KiB/s (10.1MB/s)(9992KiB/1008msec); 0 zone resets 00:10:08.026 slat (usec): min=2, max=11745, avg=260.68, stdev=1135.58 00:10:08.026 clat (usec): min=1706, max=113119, avg=35502.60, stdev=25368.21 00:10:08.026 lat (usec): min=1720, max=113130, avg=35763.28, stdev=25526.85 00:10:08.026 clat percentiles (msec): 00:10:08.026 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 17], 20.00th=[ 21], 00:10:08.026 | 30.00th=[ 22], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 31], 00:10:08.026 | 70.00th=[ 38], 80.00th=[ 45], 90.00th=[ 84], 95.00th=[ 102], 00:10:08.026 | 99.00th=[ 106], 99.50th=[ 111], 99.90th=[ 113], 99.95th=[ 113], 00:10:08.026 | 99.99th=[ 113] 00:10:08.026 bw ( KiB/s): min= 6672, max=12312, per=13.26%, avg=9492.00, stdev=3988.08, samples=2 00:10:08.026 iops : min= 1668, max= 3078, avg=2373.00, stdev=997.02, samples=2 00:10:08.026 lat (msec) : 2=0.07%, 4=0.35%, 10=2.66%, 20=40.74%, 50=45.91% 00:10:08.026 lat (msec) : 100=7.26%, 250=3.01% 00:10:08.026 cpu : usr=1.89%, sys=3.57%, ctx=355, majf=0, minf=1 00:10:08.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:08.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.026 issued rwts: total=2048,2498,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.026 job1: (groupid=0, jobs=1): err= 0: pid=1927723: Sun Oct 6 11:05:05 2024 00:10:08.026 read: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec) 00:10:08.026 slat (nsec): min=1450, max=9146.2k, avg=90291.29, stdev=630682.85 00:10:08.026 clat (usec): min=3589, max=19466, avg=11136.38, stdev=2492.15 00:10:08.026 lat (usec): min=3595, max=19472, avg=11226.67, stdev=2535.00 00:10:08.026 clat percentiles (usec): 00:10:08.026 | 1.00th=[ 4752], 5.00th=[ 7963], 10.00th=[ 9503], 20.00th=[ 9896], 00:10:08.026 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:10:08.026 | 70.00th=[10945], 80.00th=[12780], 90.00th=[15270], 95.00th=[16581], 00:10:08.026 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268], 00:10:08.026 | 99.99th=[19530] 00:10:08.026 write: IOPS=6159, BW=24.1MiB/s (25.2MB/s)(24.2MiB/1006msec); 0 zone resets 00:10:08.026 slat (usec): min=2, max=8798, avg=65.86, stdev=308.67 00:10:08.026 clat (usec): min=1764, max=19117, avg=9540.31, stdev=2408.00 00:10:08.026 lat (usec): min=1776, max=19120, avg=9606.17, stdev=2423.55 00:10:08.026 clat percentiles (usec): 00:10:08.026 | 1.00th=[ 
2802], 5.00th=[ 4621], 10.00th=[ 6063], 20.00th=[ 7504], 00:10:08.026 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:10:08.026 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11863], 95.00th=[12649], 00:10:08.026 | 99.00th=[15270], 99.50th=[15401], 99.90th=[18744], 99.95th=[18744], 00:10:08.026 | 99.99th=[19006] 00:10:08.026 bw ( KiB/s): min=24576, max=24576, per=34.34%, avg=24576.00, stdev= 0.00, samples=2 00:10:08.026 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:10:08.026 lat (msec) : 2=0.06%, 4=2.01%, 10=31.70%, 20=66.23% 00:10:08.026 cpu : usr=3.78%, sys=7.16%, ctx=735, majf=0, minf=1 00:10:08.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:08.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.026 issued rwts: total=6144,6196,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.026 job2: (groupid=0, jobs=1): err= 0: pid=1927724: Sun Oct 6 11:05:05 2024 00:10:08.026 read: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec) 00:10:08.026 slat (nsec): min=1389, max=12790k, avg=110675.76, stdev=833396.78 00:10:08.026 clat (usec): min=4773, max=32364, avg=13594.41, stdev=3445.09 00:10:08.026 lat (usec): min=4779, max=32366, avg=13705.08, stdev=3530.33 00:10:08.026 clat percentiles (usec): 00:10:08.026 | 1.00th=[ 6915], 5.00th=[10159], 10.00th=[11207], 20.00th=[11600], 00:10:08.026 | 30.00th=[11863], 40.00th=[12518], 50.00th=[12780], 60.00th=[13304], 00:10:08.026 | 70.00th=[13698], 80.00th=[14484], 90.00th=[17433], 95.00th=[20579], 00:10:08.026 | 99.00th=[28181], 99.50th=[31065], 99.90th=[32375], 99.95th=[32375], 00:10:08.026 | 99.99th=[32375] 00:10:08.026 write: IOPS=4760, BW=18.6MiB/s (19.5MB/s)(18.8MiB/1013msec); 0 zone resets 00:10:08.026 slat (usec): min=2, max=10239, avg=91.46, stdev=515.87 00:10:08.026 clat (usec): min=589, max=32362, avg=13637.54, stdev=6433.37 00:10:08.026 lat (usec): min=606, max=32366, avg=13729.00, stdev=6475.78 00:10:08.026 clat percentiles (usec): 00:10:08.026 | 1.00th=[ 3064], 5.00th=[ 5473], 10.00th=[ 7111], 20.00th=[ 8291], 00:10:08.026 | 30.00th=[10290], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:10:08.026 | 70.00th=[14615], 80.00th=[20841], 90.00th=[24773], 95.00th=[25822], 00:10:08.026 | 99.00th=[29230], 99.50th=[29492], 99.90th=[30278], 99.95th=[32375], 00:10:08.026 | 99.99th=[32375] 00:10:08.026 bw ( KiB/s): min=16136, max=21424, per=26.24%, avg=18780.00, stdev=3739.18, samples=2 00:10:08.026 iops : min= 4034, max= 5356, avg=4695.00, stdev=934.80, samples=2 00:10:08.026 lat (usec) : 750=0.03%, 1000=0.03% 00:10:08.026 lat (msec) : 2=0.18%, 4=0.89%, 10=14.87%, 20=70.52%, 50=13.48% 00:10:08.026 cpu : usr=4.74%, sys=5.43%, ctx=460, majf=0, minf=1 00:10:08.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:08.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.027 issued rwts: total=4608,4822,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.027 job3: (groupid=0, jobs=1): err= 0: pid=1927725: Sun Oct 6 11:05:05 2024 00:10:08.027 read: IOPS=4167, BW=16.3MiB/s (17.1MB/s)(16.5MiB/1011msec) 00:10:08.027 slat (nsec): min=1338, max=17473k, avg=113006.87, stdev=783590.25 
00:10:08.027 clat (usec): min=4810, max=59899, avg=15381.06, stdev=9463.19 00:10:08.027 lat (usec): min=4816, max=60613, avg=15494.07, stdev=9528.24 00:10:08.027 clat percentiles (usec): 00:10:08.027 | 1.00th=[ 5014], 5.00th=[ 8717], 10.00th=[10159], 20.00th=[11207], 00:10:08.027 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12649], 00:10:08.027 | 70.00th=[13960], 80.00th=[17171], 90.00th=[20579], 95.00th=[38536], 00:10:08.027 | 99.00th=[56886], 99.50th=[59507], 99.90th=[60031], 99.95th=[60031], 00:10:08.027 | 99.99th=[60031] 00:10:08.027 write: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec); 0 zone resets 00:10:08.027 slat (usec): min=2, max=12120, avg=104.65, stdev=572.44 00:10:08.027 clat (usec): min=1008, max=35400, avg=13742.18, stdev=5634.73 00:10:08.027 lat (usec): min=1021, max=35404, avg=13846.84, stdev=5670.31 00:10:08.027 clat percentiles (usec): 00:10:08.027 | 1.00th=[ 3851], 5.00th=[ 6980], 10.00th=[ 7308], 20.00th=[10028], 00:10:08.027 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12911], 00:10:08.027 | 70.00th=[15008], 80.00th=[16712], 90.00th=[21890], 95.00th=[22152], 00:10:08.027 | 99.00th=[33817], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:10:08.027 | 99.99th=[35390] 00:10:08.027 bw ( KiB/s): min=12360, max=24416, per=25.69%, avg=18388.00, stdev=8524.88, samples=2 00:10:08.027 iops : min= 3090, max= 6104, avg=4597.00, stdev=2131.22, samples=2 00:10:08.027 lat (msec) : 2=0.29%, 4=0.29%, 10=14.22%, 20=71.38%, 50=12.11% 00:10:08.027 lat (msec) : 100=1.71% 00:10:08.027 cpu : usr=3.27%, sys=5.94%, ctx=412, majf=0, minf=1 00:10:08.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:08.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.027 issued rwts: total=4213,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.027 00:10:08.027 Run status group 0 (all jobs): 00:10:08.027 READ: bw=65.6MiB/s (68.8MB/s), 8127KiB/s-23.9MiB/s (8322kB/s-25.0MB/s), io=66.5MiB (69.7MB), run=1006-1013msec 00:10:08.027 WRITE: bw=69.9MiB/s (73.3MB/s), 9913KiB/s-24.1MiB/s (10.1MB/s-25.2MB/s), io=70.8MiB (74.2MB), run=1006-1013msec 00:10:08.027 00:10:08.027 Disk stats (read/write): 00:10:08.027 nvme0n1: ios=2097/2199, merge=0/0, ticks=24785/46893, in_queue=71678, util=90.08% 00:10:08.027 nvme0n2: ios=5160/5383, merge=0/0, ticks=55875/50319, in_queue=106194, util=96.24% 00:10:08.027 nvme0n3: ios=3886/4096, merge=0/0, ticks=50912/54749, in_queue=105661, util=93.45% 00:10:08.027 nvme0n4: ios=3465/3584, merge=0/0, ticks=22391/22889, in_queue=45280, util=95.50% 00:10:08.027 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:08.027 [global] 00:10:08.027 thread=1 00:10:08.027 invalidate=1 00:10:08.027 rw=randwrite 00:10:08.027 time_based=1 00:10:08.027 runtime=1 00:10:08.027 ioengine=libaio 00:10:08.027 direct=1 00:10:08.027 bs=4096 00:10:08.027 iodepth=128 00:10:08.027 norandommap=0 00:10:08.027 numjobs=1 00:10:08.027 00:10:08.027 verify_dump=1 00:10:08.027 verify_backlog=512 00:10:08.027 verify_state_save=0 00:10:08.027 do_verify=1 00:10:08.027 verify=crc32c-intel 00:10:08.027 [job0] 00:10:08.027 filename=/dev/nvme0n1 00:10:08.027 [job1] 00:10:08.027 filename=/dev/nvme0n2 00:10:08.027 [job2] 
00:10:08.027 filename=/dev/nvme0n3 00:10:08.027 [job3] 00:10:08.027 filename=/dev/nvme0n4 00:10:08.027 Could not set queue depth (nvme0n1) 00:10:08.027 Could not set queue depth (nvme0n2) 00:10:08.027 Could not set queue depth (nvme0n3) 00:10:08.027 Could not set queue depth (nvme0n4) 00:10:08.284 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.284 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.284 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.284 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.284 fio-3.35 00:10:08.284 Starting 4 threads 00:10:09.656 00:10:09.656 job0: (groupid=0, jobs=1): err= 0: pid=1928091: Sun Oct 6 11:05:07 2024 00:10:09.656 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:10:09.656 slat (nsec): min=1071, max=29347k, avg=133527.92, stdev=915620.38 00:10:09.656 clat (usec): min=5054, max=58893, avg=17327.33, stdev=9905.68 00:10:09.656 lat (usec): min=5060, max=58900, avg=17460.86, stdev=9947.81 00:10:09.656 clat percentiles (usec): 00:10:09.656 | 1.00th=[ 5800], 5.00th=[ 7308], 10.00th=[ 9372], 20.00th=[10945], 00:10:09.656 | 30.00th=[11731], 40.00th=[12256], 50.00th=[13304], 60.00th=[15401], 00:10:09.656 | 70.00th=[19530], 80.00th=[23200], 90.00th=[28181], 95.00th=[40109], 00:10:09.656 | 99.00th=[55837], 99.50th=[58459], 99.90th=[58983], 99.95th=[58983], 00:10:09.656 | 99.99th=[58983] 00:10:09.656 write: IOPS=3928, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1003msec); 0 zone resets 00:10:09.656 slat (nsec): min=1719, max=9875.3k, avg=127727.69, stdev=703562.72 00:10:09.656 clat (usec): min=1045, max=45164, avg=16502.95, stdev=6439.74 00:10:09.656 lat (usec): min=3555, max=50847, avg=16630.68, stdev=6472.33 00:10:09.656 clat percentiles (usec): 00:10:09.656 | 1.00th=[ 7242], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[11863], 00:10:09.656 | 30.00th=[12649], 40.00th=[13698], 50.00th=[14746], 60.00th=[16581], 00:10:09.656 | 70.00th=[17695], 80.00th=[21365], 90.00th=[24773], 95.00th=[28705], 00:10:09.656 | 99.00th=[40633], 99.50th=[44827], 99.90th=[45351], 99.95th=[45351], 00:10:09.656 | 99.99th=[45351] 00:10:09.656 bw ( KiB/s): min=14112, max=16384, per=23.06%, avg=15248.00, stdev=1606.55, samples=2 00:10:09.656 iops : min= 3528, max= 4096, avg=3812.00, stdev=401.64, samples=2 00:10:09.656 lat (msec) : 2=0.01%, 4=0.08%, 10=11.16%, 20=63.84%, 50=24.02% 00:10:09.656 lat (msec) : 100=0.89% 00:10:09.656 cpu : usr=2.20%, sys=3.89%, ctx=370, majf=0, minf=1 00:10:09.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:09.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:09.657 issued rwts: total=3584,3940,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:09.657 job1: (groupid=0, jobs=1): err= 0: pid=1928092: Sun Oct 6 11:05:07 2024 00:10:09.657 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:10:09.657 slat (nsec): min=1120, max=58314k, avg=201078.45, stdev=1953337.74 00:10:09.657 clat (msec): min=2, max=150, avg=27.40, stdev=31.48 00:10:09.657 lat (msec): min=2, max=150, avg=27.60, stdev=31.65 00:10:09.657 clat percentiles (msec): 00:10:09.657 | 1.00th=[ 5], 5.00th=[ 7], 
10.00th=[ 10], 20.00th=[ 12], 00:10:09.657 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 16], 00:10:09.657 | 70.00th=[ 18], 80.00th=[ 33], 90.00th=[ 82], 95.00th=[ 99], 00:10:09.657 | 99.00th=[ 150], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 150], 00:10:09.657 | 99.99th=[ 150] 00:10:09.657 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1003msec); 0 zone resets 00:10:09.657 slat (nsec): min=1890, max=31253k, avg=119041.54, stdev=889658.39 00:10:09.657 clat (usec): min=689, max=35323, avg=13732.80, stdev=5816.50 00:10:09.657 lat (usec): min=2837, max=61712, avg=13851.85, stdev=5891.76 00:10:09.657 clat percentiles (usec): 00:10:09.657 | 1.00th=[ 6390], 5.00th=[ 8160], 10.00th=[ 8848], 20.00th=[ 9634], 00:10:09.657 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11731], 60.00th=[12387], 00:10:09.657 | 70.00th=[14877], 80.00th=[17171], 90.00th=[22676], 95.00th=[25035], 00:10:09.657 | 99.00th=[32900], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:10:09.657 | 99.99th=[35390] 00:10:09.657 bw ( KiB/s): min= 8616, max=15960, per=18.58%, avg=12288.00, stdev=5192.99, samples=2 00:10:09.657 iops : min= 2154, max= 3990, avg=3072.00, stdev=1298.25, samples=2 00:10:09.657 lat (usec) : 750=0.02% 00:10:09.657 lat (msec) : 4=0.52%, 10=18.05%, 20=60.33%, 50=12.28%, 100=6.76% 00:10:09.657 lat (msec) : 250=2.05% 00:10:09.657 cpu : usr=1.70%, sys=3.29%, ctx=272, majf=0, minf=1 00:10:09.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:09.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:09.657 issued rwts: total=3072,3078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:09.657 job2: (groupid=0, jobs=1): err= 0: pid=1928093: Sun Oct 6 11:05:07 2024 00:10:09.657 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:10:09.657 slat (nsec): min=1129, max=19975k, avg=113830.74, stdev=787669.16 00:10:09.657 clat (usec): min=4476, max=63419, avg=14406.59, stdev=6022.07 00:10:09.657 lat (usec): min=4484, max=65129, avg=14520.42, stdev=6070.86 00:10:09.657 clat percentiles (usec): 00:10:09.657 | 1.00th=[ 6456], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10814], 00:10:09.657 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12518], 60.00th=[13304], 00:10:09.657 | 70.00th=[15008], 80.00th=[17433], 90.00th=[21890], 95.00th=[26870], 00:10:09.657 | 99.00th=[34341], 99.50th=[38536], 99.90th=[57934], 99.95th=[57934], 00:10:09.657 | 99.99th=[63177] 00:10:09.657 write: IOPS=4939, BW=19.3MiB/s (20.2MB/s)(19.4MiB/1003msec); 0 zone resets 00:10:09.657 slat (usec): min=2, max=8701, avg=89.88, stdev=537.80 00:10:09.657 clat (usec): min=2395, max=27631, avg=12030.77, stdev=4629.44 00:10:09.657 lat (usec): min=2404, max=27637, avg=12120.64, stdev=4651.60 00:10:09.657 clat percentiles (usec): 00:10:09.657 | 1.00th=[ 3720], 5.00th=[ 6587], 10.00th=[ 7439], 20.00th=[ 8848], 00:10:09.657 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[11076], 60.00th=[12256], 00:10:09.657 | 70.00th=[13304], 80.00th=[14746], 90.00th=[19268], 95.00th=[22152], 00:10:09.657 | 99.00th=[26084], 99.50th=[27132], 99.90th=[27657], 99.95th=[27657], 00:10:09.657 | 99.99th=[27657] 00:10:09.657 bw ( KiB/s): min=17368, max=21248, per=29.20%, avg=19308.00, stdev=2743.57, samples=2 00:10:09.657 iops : min= 4342, max= 5312, avg=4827.00, stdev=685.89, samples=2 00:10:09.657 lat (msec) : 4=1.16%, 10=27.14%, 20=61.68%, 50=9.89%, 100=0.13% 
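The only parameter that differs between this group and the queue-depth-1 randwrite group earlier in the log is fio-wrapper's -d argument, which appears as iodepth in the dumped job description. That is why the 'IO depths' histograms in these jobs sit almost entirely in the >=64 bucket (roughly 99%), whereas the iodepth=1 runs report 1=100.0%: fio buckets the observed in-flight depth into power-of-two ranges with >=64 as the catch-all. The two invocations, with the long Jenkins workspace prefix shortened here for readability:

    scripts/fio-wrapper -p nvmf -i 4096 -d 1   -t randwrite -r 1 -v    # iodepth=1
    scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v    # iodepth=128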
00:10:09.657 cpu : usr=2.30%, sys=5.59%, ctx=419, majf=0, minf=1 00:10:09.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:09.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:09.657 issued rwts: total=4608,4954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:09.657 job3: (groupid=0, jobs=1): err= 0: pid=1928094: Sun Oct 6 11:05:07 2024 00:10:09.657 read: IOPS=4586, BW=17.9MiB/s (18.8MB/s)(17.9MiB/1001msec) 00:10:09.657 slat (nsec): min=1136, max=22512k, avg=107974.57, stdev=728406.08 00:10:09.657 clat (usec): min=596, max=61795, avg=14461.95, stdev=7439.37 00:10:09.657 lat (usec): min=3115, max=61801, avg=14569.93, stdev=7468.32 00:10:09.657 clat percentiles (usec): 00:10:09.657 | 1.00th=[ 3294], 5.00th=[ 8717], 10.00th=[10028], 20.00th=[11076], 00:10:09.657 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 00:10:09.657 | 70.00th=[14091], 80.00th=[15139], 90.00th=[20055], 95.00th=[25560], 00:10:09.657 | 99.00th=[54264], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:10:09.657 | 99.99th=[61604] 00:10:09.657 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:10:09.657 slat (nsec): min=1901, max=8954.3k, avg=98511.92, stdev=510808.74 00:10:09.657 clat (usec): min=4368, max=23492, avg=12946.36, stdev=3271.96 00:10:09.657 lat (usec): min=4377, max=23511, avg=13044.88, stdev=3292.58 00:10:09.657 clat percentiles (usec): 00:10:09.657 | 1.00th=[ 6390], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[10945], 00:10:09.657 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11994], 60.00th=[12780], 00:10:09.657 | 70.00th=[13435], 80.00th=[15795], 90.00th=[17433], 95.00th=[19530], 00:10:09.657 | 99.00th=[21627], 99.50th=[21890], 99.90th=[23462], 99.95th=[23462], 00:10:09.657 | 99.99th=[23462] 00:10:09.657 bw ( KiB/s): min=18480, max=18480, per=27.95%, avg=18480.00, stdev= 0.00, samples=1 00:10:09.657 iops : min= 4620, max= 4620, avg=4620.00, stdev= 0.00, samples=1 00:10:09.657 lat (usec) : 750=0.01% 00:10:09.657 lat (msec) : 4=0.76%, 10=10.78%, 20=81.10%, 50=6.66%, 100=0.68% 00:10:09.657 cpu : usr=2.80%, sys=4.80%, ctx=466, majf=0, minf=1 00:10:09.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:09.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:09.657 issued rwts: total=4591,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:09.657 00:10:09.657 Run status group 0 (all jobs): 00:10:09.657 READ: bw=61.7MiB/s (64.7MB/s), 12.0MiB/s-17.9MiB/s (12.5MB/s-18.8MB/s), io=61.9MiB (64.9MB), run=1001-1003msec 00:10:09.657 WRITE: bw=64.6MiB/s (67.7MB/s), 12.0MiB/s-19.3MiB/s (12.6MB/s-20.2MB/s), io=64.8MiB (67.9MB), run=1001-1003msec 00:10:09.657 00:10:09.657 Disk stats (read/write): 00:10:09.657 nvme0n1: ios=3122/3356, merge=0/0, ticks=19440/16016, in_queue=35456, util=85.37% 00:10:09.657 nvme0n2: ios=2531/2560, merge=0/0, ticks=22137/10609, in_queue=32746, util=99.39% 00:10:09.657 nvme0n3: ios=4105/4103, merge=0/0, ticks=34978/28061, in_queue=63039, util=92.61% 00:10:09.657 nvme0n4: ios=3654/4096, merge=0/0, ticks=24208/19794, in_queue=44002, util=98.53% 00:10:09.657 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- 
# sync 00:10:09.657 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1928320 00:10:09.657 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:09.657 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:09.657 [global] 00:10:09.657 thread=1 00:10:09.657 invalidate=1 00:10:09.657 rw=read 00:10:09.657 time_based=1 00:10:09.657 runtime=10 00:10:09.657 ioengine=libaio 00:10:09.657 direct=1 00:10:09.657 bs=4096 00:10:09.657 iodepth=1 00:10:09.657 norandommap=1 00:10:09.657 numjobs=1 00:10:09.657 00:10:09.657 [job0] 00:10:09.657 filename=/dev/nvme0n1 00:10:09.657 [job1] 00:10:09.657 filename=/dev/nvme0n2 00:10:09.657 [job2] 00:10:09.657 filename=/dev/nvme0n3 00:10:09.657 [job3] 00:10:09.657 filename=/dev/nvme0n4 00:10:09.657 Could not set queue depth (nvme0n1) 00:10:09.657 Could not set queue depth (nvme0n2) 00:10:09.657 Could not set queue depth (nvme0n3) 00:10:09.657 Could not set queue depth (nvme0n4) 00:10:09.914 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.914 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.914 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.914 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.914 fio-3.35 00:10:09.914 Starting 4 threads 00:10:13.199 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:13.199 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=385024, buflen=4096 00:10:13.199 fio: pid=1928466, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:13.199 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:13.199 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:13.199 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:13.199 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9564160, buflen=4096 00:10:13.199 fio: pid=1928465, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:13.199 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=45035520, buflen=4096 00:10:13.199 fio: pid=1928463, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:13.199 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:13.199 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:13.458 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=598016, buflen=4096 00:10:13.458 fio: pid=1928464, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:10:13.458 11:05:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:13.458 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:13.458 00:10:13.458 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1928463: Sun Oct 6 11:05:10 2024 00:10:13.458 read: IOPS=3456, BW=13.5MiB/s (14.2MB/s)(42.9MiB/3181msec) 00:10:13.458 slat (usec): min=6, max=35230, avg=12.92, stdev=367.17 00:10:13.458 clat (usec): min=202, max=3792, avg=272.33, stdev=53.53 00:10:13.458 lat (usec): min=212, max=35664, avg=285.25, stdev=372.55 00:10:13.458 clat percentiles (usec): 00:10:13.458 | 1.00th=[ 231], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 255], 00:10:13.458 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 265], 60.00th=[ 269], 00:10:13.458 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 322], 00:10:13.458 | 99.00th=[ 396], 99.50th=[ 441], 99.90th=[ 474], 99.95th=[ 506], 00:10:13.458 | 99.99th=[ 3326] 00:10:13.458 bw ( KiB/s): min=13096, max=14504, per=87.17%, avg=13941.17, stdev=508.86, samples=6 00:10:13.458 iops : min= 3274, max= 3626, avg=3485.17, stdev=127.31, samples=6 00:10:13.458 lat (usec) : 250=12.42%, 500=87.50%, 750=0.05% 00:10:13.458 lat (msec) : 4=0.02% 00:10:13.458 cpu : usr=2.30%, sys=5.25%, ctx=11000, majf=0, minf=1 00:10:13.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.458 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.458 issued rwts: total=10996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.458 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1928464: Sun Oct 6 11:05:10 2024 00:10:13.458 read: IOPS=43, BW=172KiB/s (176kB/s)(584KiB/3394msec) 00:10:13.458 slat (usec): min=8, max=20058, avg=281.33, stdev=1992.85 00:10:13.458 clat (usec): min=291, max=42009, avg=22953.73, stdev=20259.00 00:10:13.458 lat (usec): min=302, max=61089, avg=23189.12, stdev=20550.79 00:10:13.458 clat percentiles (usec): 00:10:13.458 | 1.00th=[ 302], 5.00th=[ 338], 10.00th=[ 359], 20.00th=[ 383], 00:10:13.458 | 30.00th=[ 457], 40.00th=[ 529], 50.00th=[40633], 60.00th=[41157], 00:10:13.458 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:13.458 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:13.458 | 99.99th=[42206] 00:10:13.458 bw ( KiB/s): min= 93, max= 600, per=1.14%, avg=182.17, stdev=204.75, samples=6 00:10:13.458 iops : min= 23, max= 150, avg=45.50, stdev=51.21, samples=6 00:10:13.458 lat (usec) : 500=34.69%, 750=9.52% 00:10:13.458 lat (msec) : 50=55.10% 00:10:13.458 cpu : usr=0.00%, sys=0.35%, ctx=151, majf=0, minf=2 00:10:13.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.458 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.458 issued rwts: total=147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.458 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): 
pid=1928465: Sun Oct 6 11:05:10 2024 00:10:13.458 read: IOPS=780, BW=3121KiB/s (3196kB/s)(9340KiB/2993msec) 00:10:13.458 slat (usec): min=6, max=688, avg= 9.78, stdev=14.35 00:10:13.458 clat (usec): min=285, max=42040, avg=1265.83, stdev=5977.26 00:10:13.458 lat (usec): min=292, max=42054, avg=1275.60, stdev=5981.05 00:10:13.458 clat percentiles (usec): 00:10:13.458 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 326], 00:10:13.458 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 363], 60.00th=[ 379], 00:10:13.458 | 70.00th=[ 396], 80.00th=[ 412], 90.00th=[ 469], 95.00th=[ 502], 00:10:13.458 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:13.458 | 99.99th=[42206] 00:10:13.458 bw ( KiB/s): min= 96, max= 9944, per=17.61%, avg=2816.00, stdev=4095.39, samples=5 00:10:13.458 iops : min= 24, max= 2486, avg=704.00, stdev=1023.85, samples=5 00:10:13.458 lat (usec) : 500=94.91%, 750=2.78%, 1000=0.04% 00:10:13.458 lat (msec) : 4=0.04%, 50=2.18% 00:10:13.458 cpu : usr=0.43%, sys=1.24%, ctx=2339, majf=0, minf=2 00:10:13.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.458 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.458 issued rwts: total=2336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.458 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1928466: Sun Oct 6 11:05:10 2024 00:10:13.458 read: IOPS=34, BW=137KiB/s (140kB/s)(376KiB/2743msec) 00:10:13.458 slat (nsec): min=5257, max=33913, avg=12491.41, stdev=5956.27 00:10:13.458 clat (usec): min=248, max=42010, avg=28902.79, stdev=18707.00 00:10:13.458 lat (usec): min=254, max=42020, avg=28915.30, stdev=18709.81 00:10:13.458 clat percentiles (usec): 00:10:13.458 | 1.00th=[ 249], 5.00th=[ 258], 10.00th=[ 273], 20.00th=[ 326], 00:10:13.458 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:13.458 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:13.458 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:13.458 | 99.99th=[42206] 00:10:13.458 bw ( KiB/s): min= 96, max= 312, per=0.88%, avg=140.80, stdev=95.77, samples=5 00:10:13.458 iops : min= 24, max= 78, avg=35.20, stdev=23.94, samples=5 00:10:13.458 lat (usec) : 250=1.05%, 500=27.37%, 1000=1.05% 00:10:13.458 lat (msec) : 50=69.47% 00:10:13.458 cpu : usr=0.00%, sys=0.11%, ctx=95, majf=0, minf=2 00:10:13.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.458 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.458 issued rwts: total=95,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.458 00:10:13.458 Run status group 0 (all jobs): 00:10:13.458 READ: bw=15.6MiB/s (16.4MB/s), 137KiB/s-13.5MiB/s (140kB/s-14.2MB/s), io=53.0MiB (55.6MB), run=2743-3394msec 00:10:13.458 00:10:13.458 Disk stats (read/write): 00:10:13.458 nvme0n1: ios=10836/0, merge=0/0, ticks=3631/0, in_queue=3631, util=98.27% 00:10:13.458 nvme0n2: ios=180/0, merge=0/0, ticks=4222/0, in_queue=4222, util=98.25% 00:10:13.458 nvme0n3: ios=2360/0, merge=0/0, ticks=3232/0, in_queue=3232, util=100.00% 00:10:13.458 nvme0n4: ios=91/0, merge=0/0, 
ticks=2595/0, in_queue=2595, util=96.48% 00:10:13.717 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:13.717 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:13.975 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:13.975 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:14.232 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:14.232 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:14.232 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:14.232 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:14.490 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:14.490 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1928320 00:10:14.490 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:14.490 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:14.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.490 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:14.490 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:14.490 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:14.490 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.490 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:14.490 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.490 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:14.490 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:14.490 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:14.490 nvmf hotplug test: fio failed as expected 00:10:14.490 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:14.748 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:14.748 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 
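What precedes this cleanup is the hotplug check: a 10-second read job (fio-wrapper ... -t read -r 10) is left running in the background against the four exported namespaces while the backing bdevs are deleted over RPC, so the remaining reads fail with errno 95 (Operation not supported) or errno 5 (Input/output error) as logged above, the harness records fio_status=4 instead of 0, and it prints 'nvmf hotplug test: fio failed as expected'. A condensed restatement of the logged commands, not the fio.sh source itself (workspace paths shortened; bdev names taken from this run):

    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_raid_delete raid0
    for b in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        scripts/rpc.py bdev_malloc_delete "$b"
    done
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'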
00:10:14.748 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:14.748 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:14.748 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:14.748 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:14.748 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:14.748 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:14.748 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:14.748 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:14.748 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:14.748 rmmod nvme_tcp 00:10:14.748 rmmod nvme_fabrics 00:10:15.006 rmmod nvme_keyring 00:10:15.006 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:15.006 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:15.006 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:15.006 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1925454 ']' 00:10:15.006 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1925454 00:10:15.006 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1925454 ']' 00:10:15.006 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1925454 00:10:15.006 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:15.006 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.006 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1925454 00:10:15.006 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:15.006 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:15.006 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1925454' 00:10:15.006 killing process with pid 1925454 00:10:15.006 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1925454 00:10:15.006 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1925454 00:10:15.263 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:15.263 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:15.263 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:15.263 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:15.263 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:15.263 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:10:15.263 11:05:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:10:15.263 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.263 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:15.263 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.263 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.263 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.171 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:17.171 00:10:17.171 real 0m26.398s 00:10:17.171 user 1m46.816s 00:10:17.171 sys 0m7.937s 00:10:17.171 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:17.171 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.171 ************************************ 00:10:17.171 END TEST nvmf_fio_target 00:10:17.171 ************************************ 00:10:17.171 11:05:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:17.171 11:05:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:17.171 11:05:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:17.171 11:05:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:17.431 ************************************ 00:10:17.431 START TEST nvmf_bdevio 00:10:17.431 ************************************ 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:17.431 * Looking for test storage... 
00:10:17.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:17.431 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:17.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.432 --rc genhtml_branch_coverage=1 00:10:17.432 --rc genhtml_function_coverage=1 00:10:17.432 --rc genhtml_legend=1 00:10:17.432 --rc geninfo_all_blocks=1 00:10:17.432 --rc geninfo_unexecuted_blocks=1 00:10:17.432 00:10:17.432 ' 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:17.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.432 --rc genhtml_branch_coverage=1 00:10:17.432 --rc genhtml_function_coverage=1 00:10:17.432 --rc genhtml_legend=1 00:10:17.432 --rc geninfo_all_blocks=1 00:10:17.432 --rc geninfo_unexecuted_blocks=1 00:10:17.432 00:10:17.432 ' 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:17.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.432 --rc genhtml_branch_coverage=1 00:10:17.432 --rc genhtml_function_coverage=1 00:10:17.432 --rc genhtml_legend=1 00:10:17.432 --rc geninfo_all_blocks=1 00:10:17.432 --rc geninfo_unexecuted_blocks=1 00:10:17.432 00:10:17.432 ' 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:17.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.432 --rc genhtml_branch_coverage=1 00:10:17.432 --rc genhtml_function_coverage=1 00:10:17.432 --rc genhtml_legend=1 00:10:17.432 --rc geninfo_all_blocks=1 00:10:17.432 --rc geninfo_unexecuted_blocks=1 00:10:17.432 00:10:17.432 ' 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:17.432 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:22.705 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:22.705 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:22.705 11:05:19 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:22.705 Found net devices under 0000:af:00.0: cvl_0_0 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:22.705 Found net devices under 0000:af:00.1: cvl_0_1 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:22.705 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:22.705 
11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:22.706 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:22.706 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:22.706 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:22.706 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.706 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:22.706 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:22.706 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:22.706 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:22.706 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:22.706 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:22.706 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:22.706 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:22.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:22.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:10:22.706 00:10:22.706 --- 10.0.0.2 ping statistics --- 00:10:22.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.706 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:22.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:22.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:10:22.706 00:10:22.706 --- 10.0.0.1 ping statistics --- 00:10:22.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.706 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1932632 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1932632 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1932632 ']' 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.706 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.706 [2024-10-06 11:05:20.147427] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:10:22.706 [2024-10-06 11:05:20.147470] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.706 [2024-10-06 11:05:20.206650] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.706 [2024-10-06 11:05:20.245776] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.706 [2024-10-06 11:05:20.245817] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.706 [2024-10-06 11:05:20.245824] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.706 [2024-10-06 11:05:20.245831] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.706 [2024-10-06 11:05:20.245836] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.706 [2024-10-06 11:05:20.247430] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:10:22.706 [2024-10-06 11:05:20.247541] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:10:22.706 [2024-10-06 11:05:20.247628] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.706 [2024-10-06 11:05:20.247629] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.966 [2024-10-06 11:05:20.389987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.966 Malloc0 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.966 11:05:20 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.966 [2024-10-06 11:05:20.433239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:22.966 { 00:10:22.966 "params": { 00:10:22.966 "name": "Nvme$subsystem", 00:10:22.966 "trtype": "$TEST_TRANSPORT", 00:10:22.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:22.966 "adrfam": "ipv4", 00:10:22.966 "trsvcid": "$NVMF_PORT", 00:10:22.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:22.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:22.966 "hdgst": ${hdgst:-false}, 00:10:22.966 "ddgst": ${ddgst:-false} 00:10:22.966 }, 00:10:22.966 "method": "bdev_nvme_attach_controller" 00:10:22.966 } 00:10:22.966 EOF 00:10:22.966 )") 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:10:22.966 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:22.966 "params": { 00:10:22.966 "name": "Nvme1", 00:10:22.966 "trtype": "tcp", 00:10:22.966 "traddr": "10.0.0.2", 00:10:22.966 "adrfam": "ipv4", 00:10:22.966 "trsvcid": "4420", 00:10:22.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:22.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:22.966 "hdgst": false, 00:10:22.966 "ddgst": false 00:10:22.966 }, 00:10:22.966 "method": "bdev_nvme_attach_controller" 00:10:22.966 }' 00:10:22.966 [2024-10-06 11:05:20.482123] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:10:22.966 [2024-10-06 11:05:20.482173] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1932800 ] 00:10:23.225 [2024-10-06 11:05:20.541094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:23.225 [2024-10-06 11:05:20.582573] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.225 [2024-10-06 11:05:20.582669] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.225 [2024-10-06 11:05:20.582671] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.225 I/O targets: 00:10:23.225 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:23.225 00:10:23.225 00:10:23.225 CUnit - A unit testing framework for C - Version 2.1-3 00:10:23.225 http://cunit.sourceforge.net/ 00:10:23.225 00:10:23.225 00:10:23.225 Suite: bdevio tests on: Nvme1n1 00:10:23.225 Test: blockdev write read block ...passed 00:10:23.483 Test: blockdev write zeroes read block ...passed 00:10:23.483 Test: blockdev write zeroes read no split ...passed 00:10:23.483 Test: blockdev write zeroes read split ...passed 00:10:23.483 Test: blockdev write zeroes read split partial ...passed 00:10:23.483 Test: blockdev reset ...[2024-10-06 11:05:20.941842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:23.483 [2024-10-06 11:05:20.941901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x560bd0 (9): Bad file descriptor 00:10:23.483 [2024-10-06 11:05:20.955026] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:23.483 passed 00:10:23.483 Test: blockdev write read 8 blocks ...passed 00:10:23.483 Test: blockdev write read size > 128k ...passed 00:10:23.483 Test: blockdev write read invalid size ...passed 00:10:23.483 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:23.483 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:23.483 Test: blockdev write read max offset ...passed 00:10:23.741 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:23.741 Test: blockdev writev readv 8 blocks ...passed 00:10:23.741 Test: blockdev writev readv 30 x 1block ...passed 00:10:23.741 Test: blockdev writev readv block ...passed 00:10:23.741 Test: blockdev writev readv size > 128k ...passed 00:10:23.741 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:23.741 Test: blockdev comparev and writev ...[2024-10-06 11:05:21.167355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.741 [2024-10-06 11:05:21.167387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:23.741 [2024-10-06 11:05:21.167401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.741 [2024-10-06 11:05:21.167409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:23.741 [2024-10-06 11:05:21.167688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.741 [2024-10-06 11:05:21.167699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:23.741 [2024-10-06 11:05:21.167710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.741 [2024-10-06 11:05:21.167717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:23.741 [2024-10-06 11:05:21.167989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.741 [2024-10-06 11:05:21.167999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:23.741 [2024-10-06 11:05:21.168011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.741 [2024-10-06 11:05:21.168018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:23.741 [2024-10-06 11:05:21.168287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.741 [2024-10-06 11:05:21.168299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:23.741 [2024-10-06 11:05:21.168310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.741 [2024-10-06 11:05:21.168318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:23.741 passed 00:10:23.741 Test: blockdev nvme passthru rw ...passed 00:10:23.741 Test: blockdev nvme passthru vendor specific ...[2024-10-06 11:05:21.250488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.741 [2024-10-06 11:05:21.250503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:23.741 [2024-10-06 11:05:21.250648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.741 [2024-10-06 11:05:21.250658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:23.741 [2024-10-06 11:05:21.250795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.741 [2024-10-06 11:05:21.250804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:23.741 [2024-10-06 11:05:21.250945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.741 [2024-10-06 11:05:21.250958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:23.741 passed 00:10:23.741 Test: blockdev nvme admin passthru ...passed 00:10:23.741 Test: blockdev copy ...passed 00:10:23.741 00:10:23.741 Run Summary: Type Total Ran Passed Failed Inactive 00:10:23.741 suites 1 1 n/a 0 0 00:10:23.742 tests 23 23 23 0 0 00:10:23.742 asserts 152 152 152 0 n/a 00:10:23.742 00:10:23.742 Elapsed time = 1.139 seconds 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:24.000 rmmod nvme_tcp 00:10:24.000 rmmod nvme_fabrics 00:10:24.000 rmmod nvme_keyring 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1932632 ']' 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1932632 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1932632 ']' 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1932632 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:24.000 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1932632 00:10:24.259 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:24.259 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:24.259 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1932632' 00:10:24.259 killing process with pid 1932632 00:10:24.259 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1932632 00:10:24.259 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1932632 00:10:24.259 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:24.259 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:24.259 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:24.259 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:24.259 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:10:24.259 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:24.260 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:10:24.260 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:24.260 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:24.260 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.260 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.260 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.795 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:26.795 00:10:26.795 real 0m9.119s 00:10:26.795 user 0m9.277s 00:10:26.795 sys 0m4.445s 00:10:26.795 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.795 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.795 ************************************ 00:10:26.795 END TEST nvmf_bdevio 00:10:26.795 ************************************ 00:10:26.795 11:05:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:26.795 00:10:26.795 real 4m27.785s 00:10:26.795 user 10m13.126s 00:10:26.795 sys 1m32.622s 
00:10:26.795 11:05:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.795 11:05:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:26.795 ************************************ 00:10:26.795 END TEST nvmf_target_core 00:10:26.795 ************************************ 00:10:26.795 11:05:23 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:26.795 11:05:23 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:26.795 11:05:23 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.795 11:05:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:26.795 ************************************ 00:10:26.795 START TEST nvmf_target_extra 00:10:26.795 ************************************ 00:10:26.795 11:05:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:26.795 * Looking for test storage... 00:10:26.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:26.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.795 --rc genhtml_branch_coverage=1 00:10:26.795 --rc genhtml_function_coverage=1 00:10:26.795 --rc genhtml_legend=1 00:10:26.795 --rc geninfo_all_blocks=1 00:10:26.795 --rc geninfo_unexecuted_blocks=1 00:10:26.795 00:10:26.795 ' 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:26.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.795 --rc genhtml_branch_coverage=1 00:10:26.795 --rc genhtml_function_coverage=1 00:10:26.795 --rc genhtml_legend=1 00:10:26.795 --rc geninfo_all_blocks=1 00:10:26.795 --rc geninfo_unexecuted_blocks=1 00:10:26.795 00:10:26.795 ' 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:26.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.795 --rc genhtml_branch_coverage=1 00:10:26.795 --rc genhtml_function_coverage=1 00:10:26.795 --rc genhtml_legend=1 00:10:26.795 --rc geninfo_all_blocks=1 00:10:26.795 --rc geninfo_unexecuted_blocks=1 00:10:26.795 00:10:26.795 ' 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:26.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.795 --rc genhtml_branch_coverage=1 00:10:26.795 --rc genhtml_function_coverage=1 00:10:26.795 --rc genhtml_legend=1 00:10:26.795 --rc geninfo_all_blocks=1 00:10:26.795 --rc geninfo_unexecuted_blocks=1 00:10:26.795 00:10:26.795 ' 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:26.795 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:26.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:26.796 ************************************ 00:10:26.796 START TEST nvmf_example 00:10:26.796 ************************************ 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:26.796 * Looking for test storage... 
00:10:26.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:26.796 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:27.055 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:27.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.056 --rc genhtml_branch_coverage=1 00:10:27.056 --rc genhtml_function_coverage=1 00:10:27.056 --rc genhtml_legend=1 00:10:27.056 --rc geninfo_all_blocks=1 00:10:27.056 --rc geninfo_unexecuted_blocks=1 00:10:27.056 00:10:27.056 ' 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:27.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.056 --rc genhtml_branch_coverage=1 00:10:27.056 --rc genhtml_function_coverage=1 00:10:27.056 --rc genhtml_legend=1 00:10:27.056 --rc geninfo_all_blocks=1 00:10:27.056 --rc geninfo_unexecuted_blocks=1 00:10:27.056 00:10:27.056 ' 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:27.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.056 --rc genhtml_branch_coverage=1 00:10:27.056 --rc genhtml_function_coverage=1 00:10:27.056 --rc genhtml_legend=1 00:10:27.056 --rc geninfo_all_blocks=1 00:10:27.056 --rc geninfo_unexecuted_blocks=1 00:10:27.056 00:10:27.056 ' 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:27.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.056 --rc genhtml_branch_coverage=1 00:10:27.056 --rc genhtml_function_coverage=1 00:10:27.056 --rc genhtml_legend=1 00:10:27.056 --rc geninfo_all_blocks=1 00:10:27.056 --rc geninfo_unexecuted_blocks=1 00:10:27.056 00:10:27.056 ' 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:27.056 11:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:27.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:27.056 11:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:27.056 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:32.328 11:05:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:32.328 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.328 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:32.328 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:32.329 Found net devices under 0000:af:00.0: cvl_0_0 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:32.329 Found net devices under 0000:af:00.1: cvl_0_1 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.329 11:05:29 
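The device walk above finds two Intel E810 ports (vendor 0x8086, device 0x159b) and resolves them through /sys to the net devices cvl_0_0 and cvl_0_1. A quick way to cross-check the same mapping outside the test harness (a sketch only; it assumes pciutils/lspci is available, which the harness itself does not rely on):

    # enumerate 8086:159b functions and print the kernel net device behind each one
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
    done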
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:32.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:32.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:10:32.329 00:10:32.329 --- 10.0.0.2 ping statistics --- 00:10:32.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.329 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:32.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:10:32.329 00:10:32.329 --- 10.0.0.1 ping statistics --- 00:10:32.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.329 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1936464 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1936464 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1936464 ']' 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:32.329 11:05:29 
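Condensed for readability, the nvmf_tcp_init sequence traced above does the following: the first discovered port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the default namespace as the initiator side at 10.0.0.1, TCP port 4420 is opened in iptables, and reachability is verified with one ping in each direction (0.234 ms and 0.077 ms above). A minimal sketch of the equivalent manual setup, using the interface names and addresses from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator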
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:32.329 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.268 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:33.527 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.527 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.527 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:33.527 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:33.527 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.527 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:33.527 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:43.511 Initializing NVMe Controllers 00:10:43.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:43.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:43.511 Initialization complete. Launching workers. 00:10:43.511 ======================================================== 00:10:43.511 Latency(us) 00:10:43.511 Device Information : IOPS MiB/s Average min max 00:10:43.511 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18258.43 71.32 3504.63 651.08 15450.00 00:10:43.511 ======================================================== 00:10:43.511 Total : 18258.43 71.32 3504.63 651.08 15450.00 00:10:43.511 00:10:43.511 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:43.511 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:43.511 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:43.511 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:43.511 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:43.511 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:43.511 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.511 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:43.511 rmmod nvme_tcp 00:10:43.511 rmmod nvme_fabrics 00:10:43.511 rmmod nvme_keyring 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 1936464 ']' 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 1936464 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1936464 ']' 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1936464 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1936464 00:10:43.771 11:05:41 
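For reference, the configuration the example test drives through rpc_cmd above can be reproduced against a running target with SPDK's standard RPC client (a sketch, assuming the target application from this run is up and listening on the default /var/tmp/spdk.sock; the arguments are the ones traced above):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512                  # creates Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Against the single Malloc-backed namespace, the perf pass above reports 18258.43 IOPS (71.32 MiB/s) at an average latency of 3504.63 us.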
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1936464' 00:10:43.771 killing process with pid 1936464 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1936464 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1936464 00:10:43.771 nvmf threads initialize successfully 00:10:43.771 bdev subsystem init successfully 00:10:43.771 created a nvmf target service 00:10:43.771 create targets's poll groups done 00:10:43.771 all subsystems of target started 00:10:43.771 nvmf target is running 00:10:43.771 all subsystems of target stopped 00:10:43.771 destroy targets's poll groups done 00:10:43.771 destroyed the nvmf target service 00:10:43.771 bdev subsystem finish successfully 00:10:43.771 nvmf threads destroy successfully 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:43.771 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:10:44.031 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:44.031 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:44.031 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.031 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.031 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.938 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:45.938 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:45.938 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:45.938 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:45.938 00:10:45.938 real 0m19.247s 00:10:45.938 user 0m46.006s 00:10:45.938 sys 0m5.598s 00:10:45.938 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.938 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:45.938 ************************************ 00:10:45.938 END TEST nvmf_example 00:10:45.938 ************************************ 00:10:45.938 11:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:45.938 11:05:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:45.938 11:05:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:45.938 11:05:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:46.199 ************************************ 00:10:46.199 START TEST nvmf_filesystem 00:10:46.199 ************************************ 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:46.199 * Looking for test storage... 00:10:46.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:46.199 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:46.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.200 --rc genhtml_branch_coverage=1 00:10:46.200 --rc genhtml_function_coverage=1 00:10:46.200 --rc genhtml_legend=1 00:10:46.200 --rc geninfo_all_blocks=1 00:10:46.200 --rc geninfo_unexecuted_blocks=1 00:10:46.200 00:10:46.200 ' 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:46.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.200 --rc genhtml_branch_coverage=1 00:10:46.200 --rc genhtml_function_coverage=1 00:10:46.200 --rc genhtml_legend=1 00:10:46.200 --rc geninfo_all_blocks=1 00:10:46.200 --rc geninfo_unexecuted_blocks=1 00:10:46.200 00:10:46.200 ' 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:46.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.200 --rc genhtml_branch_coverage=1 00:10:46.200 --rc genhtml_function_coverage=1 00:10:46.200 --rc genhtml_legend=1 00:10:46.200 --rc geninfo_all_blocks=1 00:10:46.200 --rc geninfo_unexecuted_blocks=1 00:10:46.200 00:10:46.200 ' 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:46.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.200 --rc genhtml_branch_coverage=1 00:10:46.200 --rc genhtml_function_coverage=1 00:10:46.200 --rc genhtml_legend=1 00:10:46.200 --rc geninfo_all_blocks=1 00:10:46.200 --rc geninfo_unexecuted_blocks=1 00:10:46.200 00:10:46.200 ' 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:46.200 11:05:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:46.200 11:05:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:46.200 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:46.201 11:05:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:46.201 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:46.201 #define SPDK_CONFIG_H 00:10:46.201 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:46.201 #define SPDK_CONFIG_APPS 1 00:10:46.201 #define SPDK_CONFIG_ARCH native 00:10:46.201 #undef SPDK_CONFIG_ASAN 00:10:46.201 #undef SPDK_CONFIG_AVAHI 00:10:46.201 #undef SPDK_CONFIG_CET 00:10:46.201 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:46.201 #define SPDK_CONFIG_COVERAGE 1 00:10:46.201 #define SPDK_CONFIG_CROSS_PREFIX 00:10:46.201 #undef SPDK_CONFIG_CRYPTO 00:10:46.201 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:46.201 #undef SPDK_CONFIG_CUSTOMOCF 00:10:46.201 #undef SPDK_CONFIG_DAOS 00:10:46.201 #define SPDK_CONFIG_DAOS_DIR 00:10:46.201 #define SPDK_CONFIG_DEBUG 1 00:10:46.201 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:46.201 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:46.201 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:46.201 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:46.201 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:46.201 #undef SPDK_CONFIG_DPDK_UADK 00:10:46.201 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:46.201 #define SPDK_CONFIG_EXAMPLES 1 00:10:46.201 #undef SPDK_CONFIG_FC 00:10:46.201 #define SPDK_CONFIG_FC_PATH 00:10:46.201 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:46.201 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:46.201 #define SPDK_CONFIG_FSDEV 1 00:10:46.201 #undef SPDK_CONFIG_FUSE 00:10:46.201 #undef SPDK_CONFIG_FUZZER 00:10:46.201 #define SPDK_CONFIG_FUZZER_LIB 00:10:46.201 #undef SPDK_CONFIG_GOLANG 00:10:46.201 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:46.201 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:46.202 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:46.202 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:46.202 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:46.202 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:46.202 #undef SPDK_CONFIG_HAVE_LZ4 00:10:46.202 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:46.202 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:46.202 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:46.202 #define SPDK_CONFIG_IDXD 1 00:10:46.202 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:46.202 #undef SPDK_CONFIG_IPSEC_MB 00:10:46.202 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:46.202 #define SPDK_CONFIG_ISAL 1 00:10:46.202 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:46.202 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:46.202 #define SPDK_CONFIG_LIBDIR 00:10:46.202 #undef SPDK_CONFIG_LTO 00:10:46.202 #define SPDK_CONFIG_MAX_LCORES 128 00:10:46.202 #define SPDK_CONFIG_NVME_CUSE 1 00:10:46.202 #undef SPDK_CONFIG_OCF 00:10:46.202 #define SPDK_CONFIG_OCF_PATH 00:10:46.202 #define SPDK_CONFIG_OPENSSL_PATH 00:10:46.202 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:46.202 #define SPDK_CONFIG_PGO_DIR 00:10:46.202 #undef SPDK_CONFIG_PGO_USE 00:10:46.202 #define SPDK_CONFIG_PREFIX /usr/local 00:10:46.202 #undef SPDK_CONFIG_RAID5F 00:10:46.202 #undef SPDK_CONFIG_RBD 00:10:46.202 #define SPDK_CONFIG_RDMA 1 00:10:46.202 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:46.202 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:46.202 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:46.202 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:46.202 #define SPDK_CONFIG_SHARED 1 00:10:46.202 #undef SPDK_CONFIG_SMA 00:10:46.202 
#define SPDK_CONFIG_TESTS 1 00:10:46.202 #undef SPDK_CONFIG_TSAN 00:10:46.202 #define SPDK_CONFIG_UBLK 1 00:10:46.202 #define SPDK_CONFIG_UBSAN 1 00:10:46.202 #undef SPDK_CONFIG_UNIT_TESTS 00:10:46.202 #undef SPDK_CONFIG_URING 00:10:46.202 #define SPDK_CONFIG_URING_PATH 00:10:46.202 #undef SPDK_CONFIG_URING_ZNS 00:10:46.202 #undef SPDK_CONFIG_USDT 00:10:46.202 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:46.202 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:46.202 #define SPDK_CONFIG_VFIO_USER 1 00:10:46.202 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:46.202 #define SPDK_CONFIG_VHOST 1 00:10:46.202 #define SPDK_CONFIG_VIRTIO 1 00:10:46.202 #undef SPDK_CONFIG_VTUNE 00:10:46.202 #define SPDK_CONFIG_VTUNE_DIR 00:10:46.202 #define SPDK_CONFIG_WERROR 1 00:10:46.202 #define SPDK_CONFIG_WPDK_DIR 00:10:46.202 #undef SPDK_CONFIG_XNVME 00:10:46.202 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:46.202 11:05:43 
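# Aside: the PATH values echoed above keep growing because paths/export.sh is sourced
# once per nested script, and each pass prepends the same /opt/protoc, /opt/go and
# /opt/golangci directories again. A minimal illustrative sketch of that prepend
# pattern, plus a purely hypothetical dedup helper (neither is the SPDK file itself):
prepend_path() {
    local dir
    for dir in "$@"; do
        PATH="$dir:$PATH"
    done
    export PATH
}
dedup_path() {
    # keep the first occurrence of each entry, preserving order
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    PATH=${PATH%:}   # drop the trailing colon left by ORS
    export PATH
}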
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:46.202 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:46.464 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:46.464 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:46.464 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:46.464 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:46.464 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:46.464 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:46.464 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:46.464 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:46.464 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:46.464 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:46.464 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:46.464 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:46.464 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:46.464 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:46.464 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:46.464 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:46.464 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:46.464 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:46.465 11:05:43 
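# Aside: each ": <value>" / "export SPDK_TEST_*" pair traced above is the usual bash
# "default only if unset" idiom, so flags already exported by the CI job (RUN_NIGHTLY=1,
# SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp, ...) keep their values. A hypothetical
# reconstruction of the pattern, not a copy of autotest_common.sh:
: "${SPDK_TEST_NVMF:=1}"             # keep the caller's value, otherwise default to 1
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
export SPDK_TEST_NVMF SPDK_TEST_NVMF_TRANSPORT
# which lets a single run override behaviour, e.g.
#   SPDK_TEST_NVMF_TRANSPORT=rdma ./autotest.sh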
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v22.11.4 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:46.465 
11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:46.465 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:46.466 11:05:43 
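# Aside: the export block above points the whole run at the in-tree builds instead of
# anything installed system-wide. A hedged summary of the effective environment, with
# paths taken from this job's workspace and the sanitizer options copied from the trace:
root=/var/jenkins/workspace/nvmf-tcp-phy-autotest
export LD_LIBRARY_PATH="$root/spdk/build/lib:$root/dpdk/build/lib:$root/spdk/build/libvfio-user/usr/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export PYTHONPATH="$root/spdk/python:$root/spdk/test/rpc_plugins${PYTHONPATH:+:$PYTHONPATH}"
export PYTHONDONTWRITEBYTECODE=1   # keep the workspace free of .pyc litter
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0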
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j96 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1938843 ]] 00:10:46.466 11:05:43 
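# Aside: "kill -0 <pid>" delivers no signal at all; it only asks the kernel whether the
# PID exists and can be signalled. The [[ -z ... ]] / kill -0 pair traced around here
# uses that to confirm process 1938843 from this run is still alive before scratch
# storage is reserved. Illustrative sketch only:
pid=1938843   # value taken from this run's trace
if [[ -n "$pid" ]] && kill -0 "$pid" 2>/dev/null; then
    echo "process $pid is alive"
fi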
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1938843 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.02zdfk 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.02zdfk/tests/target /tmp/spdk.02zdfk 00:10:46.466 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=83166523392 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=95552409600 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12385886208 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=47764836352 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=47776202752 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=19087458304 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=19110481920 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23023616 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=46701281280 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=47776206848 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=1074925568 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=9555226624 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=9555238912 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:46.467 * Looking for test storage... 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=83166523392 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=14600478720 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.467 11:05:43 
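# Aside: a simplified sketch of the storage probe whose output appears above. It mirrors
# the shape of the traced code (parse `df -T`, record size/avail per mount point, then
# check the filesystem backing the test directory against the ~2 GiB request), but it is
# not the SPDK implementation itself:
requested_size=$((2 * 1024 * 1024 * 1024))
declare -A fss sizes avails
while read -r source fs size used avail _ mount; do
    fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))     # df -T reports 1K blocks
    avails["$mount"]=$((avail * 1024))
done < <(df -T | grep -v Filesystem)
target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
mount_point=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
if (( ${avails[$mount_point]:-0} >= requested_size )); then
    echo "* Found test storage at $target_dir"
fi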
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.467 11:05:43 
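# Aside: the cmp_versions trace running here splits each version string on the characters
# ".-:" and compares the components numerically; that is how the harness decides the
# installed lcov (1.15) predates 2.x and therefore which --rc spelling to use for the
# coverage options set just below. A self-contained sketch of the same idea, not the
# scripts/common.sh code:
version_lt() {
    local IFS='.-:'
    local -a a b
    local i
    read -ra a <<<"$1"
    read -ra b <<<"$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # versions are equal
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"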
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.467 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:46.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.468 --rc genhtml_branch_coverage=1 00:10:46.468 --rc genhtml_function_coverage=1 00:10:46.468 --rc genhtml_legend=1 00:10:46.468 --rc geninfo_all_blocks=1 00:10:46.468 --rc geninfo_unexecuted_blocks=1 00:10:46.468 00:10:46.468 ' 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:46.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.468 --rc genhtml_branch_coverage=1 00:10:46.468 --rc genhtml_function_coverage=1 00:10:46.468 --rc genhtml_legend=1 00:10:46.468 --rc geninfo_all_blocks=1 00:10:46.468 --rc geninfo_unexecuted_blocks=1 00:10:46.468 00:10:46.468 ' 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:46.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.468 --rc genhtml_branch_coverage=1 00:10:46.468 --rc genhtml_function_coverage=1 00:10:46.468 --rc genhtml_legend=1 00:10:46.468 --rc geninfo_all_blocks=1 00:10:46.468 --rc geninfo_unexecuted_blocks=1 00:10:46.468 00:10:46.468 ' 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:46.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.468 --rc genhtml_branch_coverage=1 00:10:46.468 --rc 
genhtml_function_coverage=1 00:10:46.468 --rc genhtml_legend=1 00:10:46.468 --rc geninfo_all_blocks=1 00:10:46.468 --rc geninfo_unexecuted_blocks=1 00:10:46.468 00:10:46.468 ' 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.468 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:46.469 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:46.469 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:46.469 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:53.038 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.038 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:53.038 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:53.039 
11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:53.039 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:53.039 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:53.039 Found net devices under 0000:af:00.0: cvl_0_0 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:53.039 Found net devices under 
0000:af:00.1: cvl_0_1 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:53.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:53.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:10:53.039 00:10:53.039 --- 10.0.0.2 ping statistics --- 00:10:53.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.039 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:53.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:10:53.039 00:10:53.039 --- 10.0.0.1 ping statistics --- 00:10:53.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.039 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:53.039 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:53.040 ************************************ 00:10:53.040 START TEST nvmf_filesystem_no_in_capsule 00:10:53.040 ************************************ 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
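Note on the nvmf_tcp_init sequence traced above: the two ports of the detected E810 NIC (cvl_0_0 and cvl_0_1) are split across network namespaces so that target and initiator can exchange real NVMe/TCP traffic on a single host. Condensed from the traced commands (interface names and addresses are the ones used on this node), the preparation is roughly:

  # flush stale addressing, then move the target-side port into its own namespace
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator side (host): 10.0.0.1, target side (namespace): 10.0.0.2
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # admit NVMe/TCP traffic and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1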
00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1941946 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1941946 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1941946 ']' 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.040 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.040 [2024-10-06 11:05:49.820907] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:10:53.040 [2024-10-06 11:05:49.820945] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.040 [2024-10-06 11:05:49.877092] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.040 [2024-10-06 11:05:49.916241] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.040 [2024-10-06 11:05:49.916280] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.040 [2024-10-06 11:05:49.916287] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.040 [2024-10-06 11:05:49.916297] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.040 [2024-10-06 11:05:49.916302] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
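The target application is started inside the target namespace and then provisioned over its JSON-RPC socket; the rpc_cmd entries that follow correspond to plain scripts/rpc.py calls (a condensed sketch of the traced sequence; rpc_cmd is the test helper that forwards to rpc.py on /var/tmp/spdk.sock):

  # launch nvmf_tgt in the target namespace (shm id 0, all tracepoint groups, cores 0-3)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

  # once the RPC socket is up, provision the target
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0        # -c 0: no in-capsule data (this pass)
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1               # 512 MiB malloc bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420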
00:10:53.040 [2024-10-06 11:05:49.917779] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.040 [2024-10-06 11:05:49.917879] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.040 [2024-10-06 11:05:49.917985] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.040 [2024-10-06 11:05:49.917986] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.040 [2024-10-06 11:05:50.077589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.040 Malloc1 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.040 11:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.040 [2024-10-06 11:05:50.228319] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.040 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:53.040 { 00:10:53.040 "name": "Malloc1", 00:10:53.040 "aliases": [ 00:10:53.040 "ab8fa197-e289-464b-87a8-c10eea99bb8a" 00:10:53.040 ], 00:10:53.040 "product_name": "Malloc disk", 00:10:53.040 "block_size": 512, 00:10:53.040 "num_blocks": 1048576, 00:10:53.040 "uuid": "ab8fa197-e289-464b-87a8-c10eea99bb8a", 00:10:53.040 "assigned_rate_limits": { 00:10:53.040 "rw_ios_per_sec": 0, 00:10:53.040 "rw_mbytes_per_sec": 0, 00:10:53.040 "r_mbytes_per_sec": 0, 00:10:53.040 "w_mbytes_per_sec": 0 00:10:53.040 }, 00:10:53.040 "claimed": true, 00:10:53.040 "claim_type": "exclusive_write", 00:10:53.040 "zoned": false, 00:10:53.040 "supported_io_types": { 00:10:53.040 "read": 
true, 00:10:53.040 "write": true, 00:10:53.040 "unmap": true, 00:10:53.040 "flush": true, 00:10:53.040 "reset": true, 00:10:53.040 "nvme_admin": false, 00:10:53.040 "nvme_io": false, 00:10:53.040 "nvme_io_md": false, 00:10:53.040 "write_zeroes": true, 00:10:53.040 "zcopy": true, 00:10:53.040 "get_zone_info": false, 00:10:53.040 "zone_management": false, 00:10:53.040 "zone_append": false, 00:10:53.040 "compare": false, 00:10:53.040 "compare_and_write": false, 00:10:53.040 "abort": true, 00:10:53.040 "seek_hole": false, 00:10:53.040 "seek_data": false, 00:10:53.040 "copy": true, 00:10:53.040 "nvme_iov_md": false 00:10:53.040 }, 00:10:53.040 "memory_domains": [ 00:10:53.040 { 00:10:53.040 "dma_device_id": "system", 00:10:53.040 "dma_device_type": 1 00:10:53.040 }, 00:10:53.040 { 00:10:53.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.041 "dma_device_type": 2 00:10:53.041 } 00:10:53.041 ], 00:10:53.041 "driver_specific": {} 00:10:53.041 } 00:10:53.041 ]' 00:10:53.041 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:53.041 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:53.041 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:53.041 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:53.041 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:53.041 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:53.041 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:53.041 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:53.976 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:53.976 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:53.976 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:53.976 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:53.976 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:56.506 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:56.506 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:56.506 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:56.506 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:56.506 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:56.506 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:56.506 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:56.506 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:56.506 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:56.506 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:56.506 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:56.506 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:56.506 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:56.506 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:56.506 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:56.506 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:56.506 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:56.506 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:57.072 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:58.448 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:58.448 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:58.448 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:58.448 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.448 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.448 ************************************ 00:10:58.448 START TEST filesystem_ext4 00:10:58.448 ************************************ 00:10:58.448 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
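On the initiator side (host namespace), the exported namespace is attached with nvme-cli, located by its serial number, and partitioned before any filesystem test runs; condensed from the traced commands (the serial and hostnqn values are the ones configured earlier in this log):

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
       --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

  # find the block device whose SERIAL matches the subsystem serial, e.g. nvme0n1
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

  # one GPT partition spanning the whole 512 MiB namespace
  mkdir -p /mnt/device
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1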
00:10:58.448 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:58.448 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:58.448 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:58.448 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:58.448 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:58.448 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:58.448 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:58.449 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:58.449 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:58.449 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:58.449 mke2fs 1.47.0 (5-Feb-2023) 00:10:58.449 Discarding device blocks: 0/522240 done 00:10:58.449 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:58.449 Filesystem UUID: 2e2ff4ec-8bd0-45fe-b2cf-5ed935a54659 00:10:58.449 Superblock backups stored on blocks: 00:10:58.449 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:58.449 00:10:58.449 Allocating group tables: 0/64 done 00:10:58.449 Writing inode tables: 0/64 done 00:10:58.449 Creating journal (8192 blocks): done 00:11:00.756 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:11:00.756 00:11:00.756 11:05:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:00.756 11:05:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:07.315 
11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1941946 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:07.315 00:11:07.315 real 0m8.070s 00:11:07.315 user 0m0.033s 00:11:07.315 sys 0m0.068s 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:07.315 ************************************ 00:11:07.315 END TEST filesystem_ext4 00:11:07.315 ************************************ 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.315 ************************************ 00:11:07.315 START TEST filesystem_btrfs 00:11:07.315 ************************************ 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:07.315 11:06:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:07.315 btrfs-progs v6.8.1 00:11:07.315 See https://btrfs.readthedocs.io for more information. 00:11:07.315 00:11:07.315 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:07.315 NOTE: several default settings have changed in version 5.15, please make sure 00:11:07.315 this does not affect your deployments: 00:11:07.315 - DUP for metadata (-m dup) 00:11:07.315 - enabled no-holes (-O no-holes) 00:11:07.315 - enabled free-space-tree (-R free-space-tree) 00:11:07.315 00:11:07.315 Label: (null) 00:11:07.315 UUID: 83cfe7ad-bf91-4005-b5de-b2199f8b7182 00:11:07.315 Node size: 16384 00:11:07.315 Sector size: 4096 (CPU page size: 4096) 00:11:07.315 Filesystem size: 510.00MiB 00:11:07.315 Block group profiles: 00:11:07.315 Data: single 8.00MiB 00:11:07.315 Metadata: DUP 32.00MiB 00:11:07.315 System: DUP 8.00MiB 00:11:07.315 SSD detected: yes 00:11:07.315 Zoned device: no 00:11:07.315 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:07.315 Checksum: crc32c 00:11:07.315 Number of devices: 1 00:11:07.315 Devices: 00:11:07.315 ID SIZE PATH 00:11:07.315 1 510.00MiB /dev/nvme0n1p1 00:11:07.315 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:07.315 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:07.315 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:07.315 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:07.315 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:07.315 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:07.315 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:07.315 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:07.315 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1941946 00:11:07.315 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:07.315 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:07.315 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:07.315 
11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:07.315 00:11:07.315 real 0m0.506s 00:11:07.315 user 0m0.017s 00:11:07.315 sys 0m0.121s 00:11:07.315 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.315 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:07.315 ************************************ 00:11:07.315 END TEST filesystem_btrfs 00:11:07.315 ************************************ 00:11:07.315 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:07.316 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:07.316 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.316 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.316 ************************************ 00:11:07.316 START TEST filesystem_xfs 00:11:07.316 ************************************ 00:11:07.316 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:07.316 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:07.316 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:07.316 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:07.316 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:07.316 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:07.316 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:07.316 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:07.316 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:07.316 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:07.316 11:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:07.316 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:07.316 = sectsz=512 attr=2, projid32bit=1 00:11:07.316 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:07.316 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:07.316 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:07.316 = sunit=0 swidth=0 blks 00:11:07.316 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:07.316 log =internal log bsize=4096 blocks=16384, version=2 00:11:07.316 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:07.316 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:07.882 Discarding blocks...Done. 00:11:07.882 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:07.882 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1941946 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.413 00:11:10.413 real 0m3.399s 00:11:10.413 user 0m0.022s 00:11:10.413 sys 0m0.079s 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:10.413 ************************************ 00:11:10.413 END TEST filesystem_xfs 00:11:10.413 ************************************ 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.413 11:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.413 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:10.414 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.414 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:10.414 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.414 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.414 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.414 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.414 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:10.414 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1941946 00:11:10.414 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1941946 ']' 00:11:10.414 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1941946 00:11:10.414 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:10.414 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:10.414 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1941946 00:11:10.672 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:10.672 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:10.672 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1941946' 00:11:10.672 killing process with pid 1941946 00:11:10.672 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1941946 00:11:10.672 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 1941946 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:10.930 00:11:10.930 real 0m18.603s 00:11:10.930 user 1m13.283s 00:11:10.930 sys 0m1.402s 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.930 ************************************ 00:11:10.930 END TEST nvmf_filesystem_no_in_capsule 00:11:10.930 ************************************ 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:10.930 ************************************ 00:11:10.930 START TEST nvmf_filesystem_in_capsule 00:11:10.930 ************************************ 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1945561 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1945561 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1945561 ']' 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
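The xtrace above (and again below for the in-capsule variant) repeats one create-and-verify pass per filesystem type, following target/filesystem.sh @18-@43. Condensed into a minimal shell sketch — the device, mount point, pid and step references are copied from the trace; the variable framing around them is illustrative:
fstype=xfs; dev=/dev/nvme0n1p1; pid=1941946   # values as traced; the ext4/btrfs passes only swap fstype and force flag
force=-f                                      # ext4 passes -F instead (autotest_common.sh@932)
mkfs.$fstype $force "$dev"                    # make_filesystem (@937)
mount "$dev" /mnt/device                      # filesystem.sh@23
touch /mnt/device/aaa && sync                 # @24-@25: write a file and flush it
rm /mnt/device/aaa && sync                    # @26-@27: delete it again
umount /mnt/device                            # @30
kill -0 "$pid"                                # @37: nvmf_tgt must still be running
lsblk -l -o NAME | grep -q -w nvme0n1         # @40: namespace still visible to the host
lsblk -l -o NAME | grep -q -w nvme0n1p1       # @43: partition still visible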
00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:10.930 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.930 [2024-10-06 11:06:08.494445] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:11:10.930 [2024-10-06 11:06:08.494492] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.187 [2024-10-06 11:06:08.553175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.187 [2024-10-06 11:06:08.591666] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.187 [2024-10-06 11:06:08.591711] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.187 [2024-10-06 11:06:08.591718] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.187 [2024-10-06 11:06:08.591724] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.187 [2024-10-06 11:06:08.591730] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.187 [2024-10-06 11:06:08.593235] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.187 [2024-10-06 11:06:08.593334] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.187 [2024-10-06 11:06:08.593442] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.187 [2024-10-06 11:06:08.593443] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.187 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:11.187 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:11.187 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:11.187 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:11.187 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.187 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.187 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:11.187 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:11.187 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.187 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.187 [2024-10-06 11:06:08.750390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.187 11:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.187 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:11.187 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.187 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.445 Malloc1 00:11:11.445 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.445 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:11.445 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.445 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.445 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.445 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:11.445 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.446 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.446 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.446 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.446 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.446 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.446 [2024-10-06 11:06:08.891964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.446 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.446 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:11.446 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:11.446 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:11.446 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:11.446 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:11.446 11:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:11.446 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.446 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.446 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.446 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:11.446 { 00:11:11.446 "name": "Malloc1", 00:11:11.446 "aliases": [ 00:11:11.446 "a7651cc4-d274-4fbf-b387-5ac14ebebe3e" 00:11:11.446 ], 00:11:11.446 "product_name": "Malloc disk", 00:11:11.446 "block_size": 512, 00:11:11.446 "num_blocks": 1048576, 00:11:11.446 "uuid": "a7651cc4-d274-4fbf-b387-5ac14ebebe3e", 00:11:11.446 "assigned_rate_limits": { 00:11:11.446 "rw_ios_per_sec": 0, 00:11:11.446 "rw_mbytes_per_sec": 0, 00:11:11.446 "r_mbytes_per_sec": 0, 00:11:11.446 "w_mbytes_per_sec": 0 00:11:11.446 }, 00:11:11.446 "claimed": true, 00:11:11.446 "claim_type": "exclusive_write", 00:11:11.446 "zoned": false, 00:11:11.446 "supported_io_types": { 00:11:11.446 "read": true, 00:11:11.446 "write": true, 00:11:11.446 "unmap": true, 00:11:11.446 "flush": true, 00:11:11.446 "reset": true, 00:11:11.446 "nvme_admin": false, 00:11:11.446 "nvme_io": false, 00:11:11.446 "nvme_io_md": false, 00:11:11.446 "write_zeroes": true, 00:11:11.446 "zcopy": true, 00:11:11.446 "get_zone_info": false, 00:11:11.446 "zone_management": false, 00:11:11.446 "zone_append": false, 00:11:11.446 "compare": false, 00:11:11.446 "compare_and_write": false, 00:11:11.446 "abort": true, 00:11:11.446 "seek_hole": false, 00:11:11.446 "seek_data": false, 00:11:11.446 "copy": true, 00:11:11.446 "nvme_iov_md": false 00:11:11.446 }, 00:11:11.446 "memory_domains": [ 00:11:11.446 { 00:11:11.446 "dma_device_id": "system", 00:11:11.446 "dma_device_type": 1 00:11:11.446 }, 00:11:11.446 { 00:11:11.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.446 "dma_device_type": 2 00:11:11.446 } 00:11:11.446 ], 00:11:11.446 "driver_specific": {} 00:11:11.446 } 00:11:11.446 ]' 00:11:11.446 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:11.446 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:11.446 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:11.446 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:11.446 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:11.446 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:11.446 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:11.446 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:12.821 11:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:12.821 11:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:12.821 11:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:12.821 11:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:12.821 11:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:14.720 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:14.720 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:14.720 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:14.720 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:14.720 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:14.720 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:14.720 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:14.720 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:14.720 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:14.720 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:14.720 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:14.720 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:14.720 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:14.720 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:14.720 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:14.720 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:14.720 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:14.977 11:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:15.235 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:16.170 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:16.170 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:16.170 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:16.170 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.170 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.170 ************************************ 00:11:16.170 START TEST filesystem_in_capsule_ext4 00:11:16.170 ************************************ 00:11:16.170 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:16.170 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:16.170 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:16.170 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:16.170 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:16.170 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:16.170 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:16.170 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:16.170 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:16.170 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:16.170 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:16.170 mke2fs 1.47.0 (5-Feb-2023) 00:11:16.428 Discarding device blocks: 0/522240 done 00:11:16.428 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:16.428 Filesystem UUID: a7c4b78b-e521-4990-b584-9109179da6df 00:11:16.428 Superblock backups stored on blocks: 00:11:16.428 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:16.428 00:11:16.428 Allocating group tables: 0/64 done 00:11:16.428 Writing inode tables: 
0/64 done 00:11:16.428 Creating journal (8192 blocks): done 00:11:16.428 Writing superblocks and filesystem accounting information: 0/64 done 00:11:16.428 00:11:16.428 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:16.428 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1945561 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:22.997 00:11:22.997 real 0m5.662s 00:11:22.997 user 0m0.029s 00:11:22.997 sys 0m0.066s 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:22.997 ************************************ 00:11:22.997 END TEST filesystem_in_capsule_ext4 00:11:22.997 ************************************ 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.997 
************************************ 00:11:22.997 START TEST filesystem_in_capsule_btrfs 00:11:22.997 ************************************ 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:22.997 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:22.997 btrfs-progs v6.8.1 00:11:22.997 See https://btrfs.readthedocs.io for more information. 00:11:22.997 00:11:22.997 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:22.997 NOTE: several default settings have changed in version 5.15, please make sure 00:11:22.997 this does not affect your deployments: 00:11:22.997 - DUP for metadata (-m dup) 00:11:22.997 - enabled no-holes (-O no-holes) 00:11:22.997 - enabled free-space-tree (-R free-space-tree) 00:11:22.997 00:11:22.997 Label: (null) 00:11:22.997 UUID: b58e9cd1-5916-45a2-88f5-2d8e1c75ad9d 00:11:22.997 Node size: 16384 00:11:22.997 Sector size: 4096 (CPU page size: 4096) 00:11:22.997 Filesystem size: 510.00MiB 00:11:22.997 Block group profiles: 00:11:22.997 Data: single 8.00MiB 00:11:22.997 Metadata: DUP 32.00MiB 00:11:22.997 System: DUP 8.00MiB 00:11:22.997 SSD detected: yes 00:11:22.997 Zoned device: no 00:11:22.997 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:22.997 Checksum: crc32c 00:11:22.997 Number of devices: 1 00:11:22.997 Devices: 00:11:22.997 ID SIZE PATH 00:11:22.997 1 510.00MiB /dev/nvme0n1p1 00:11:22.998 00:11:22.998 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:22.998 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1945561 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:22.998 00:11:22.998 real 0m0.936s 00:11:22.998 user 0m0.033s 00:11:22.998 sys 0m0.108s 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:22.998 ************************************ 00:11:22.998 END TEST filesystem_in_capsule_btrfs 00:11:22.998 ************************************ 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.998 ************************************ 00:11:22.998 START TEST filesystem_in_capsule_xfs 00:11:22.998 ************************************ 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:22.998 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:22.998 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:22.998 = sectsz=512 attr=2, projid32bit=1 00:11:22.998 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:22.998 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:22.998 data = bsize=4096 blocks=130560, imaxpct=25 00:11:22.998 = sunit=0 swidth=0 blks 00:11:22.998 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:22.998 log =internal log bsize=4096 blocks=16384, version=2 00:11:22.998 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:22.998 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:23.968 Discarding blocks...Done. 
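For orientation, the target/host bring-up that precedes the in-capsule ext4/btrfs/xfs passes (filesystem.sh @52-@69 in the trace further up) reduces to the sequence below. Every argument is copied from the xtrace; the bare rpc.py invocation is an assumption standing in for the suite's rpc_cmd wrapper:
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096                              # @52: TCP transport, 4096-byte in-capsule data
rpc.py bdev_malloc_create 512 512 -b Malloc1                                        # @53: 512 MiB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME  # @54
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1                     # @55: expose the bdev as a namespace
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # @56
# Host side (@60-@69): connect, wait for the serial to appear, then partition the namespace.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    --hostid=80b56b8f-cbc7-e911-906e-0017a4403562
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME                              # waitforserial check
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%                         # @68: one partition for the filesystem tests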
00:11:23.968 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:23.968 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1945561 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:25.865 00:11:25.865 real 0m2.696s 00:11:25.865 user 0m0.026s 00:11:25.865 sys 0m0.072s 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:25.865 ************************************ 00:11:25.865 END TEST filesystem_in_capsule_xfs 00:11:25.865 ************************************ 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:25.865 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.123 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:26.123 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.123 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.123 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.123 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.123 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:26.123 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1945561 00:11:26.123 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1945561 ']' 00:11:26.123 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1945561 00:11:26.123 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:26.123 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.123 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1945561 00:11:26.123 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:26.123 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:26.123 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1945561' 00:11:26.123 killing process with pid 1945561 00:11:26.123 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1945561 00:11:26.123 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1945561 00:11:26.381 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:26.381 00:11:26.381 real 0m15.409s 00:11:26.381 user 1m0.631s 00:11:26.381 sys 0m1.323s 00:11:26.381 11:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:26.381 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.381 ************************************ 00:11:26.381 END TEST nvmf_filesystem_in_capsule 00:11:26.381 ************************************ 00:11:26.381 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:26.381 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:26.381 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:26.382 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.382 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:26.382 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.382 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.382 rmmod nvme_tcp 00:11:26.382 rmmod nvme_fabrics 00:11:26.382 rmmod nvme_keyring 00:11:26.382 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.382 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:26.382 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:26.382 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:26.382 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:26.382 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:26.382 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:26.382 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:26.382 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:11:26.382 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:11:26.382 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:26.640 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:26.640 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:26.640 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.640 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.640 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.545 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:28.545 00:11:28.545 real 0m42.500s 00:11:28.545 user 2m15.916s 00:11:28.545 sys 0m7.226s 00:11:28.545 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:28.545 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.545 
************************************ 00:11:28.545 END TEST nvmf_filesystem 00:11:28.545 ************************************ 00:11:28.545 11:06:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:28.545 11:06:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:28.545 11:06:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.545 11:06:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:28.545 ************************************ 00:11:28.545 START TEST nvmf_target_discovery 00:11:28.545 ************************************ 00:11:28.545 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:28.803 * Looking for test storage... 00:11:28.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.803 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:28.803 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:11:28.803 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:28.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.804 --rc genhtml_branch_coverage=1 00:11:28.804 --rc genhtml_function_coverage=1 00:11:28.804 --rc genhtml_legend=1 00:11:28.804 --rc geninfo_all_blocks=1 00:11:28.804 --rc geninfo_unexecuted_blocks=1 00:11:28.804 00:11:28.804 ' 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:28.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.804 --rc genhtml_branch_coverage=1 00:11:28.804 --rc genhtml_function_coverage=1 00:11:28.804 --rc genhtml_legend=1 00:11:28.804 --rc geninfo_all_blocks=1 00:11:28.804 --rc geninfo_unexecuted_blocks=1 00:11:28.804 00:11:28.804 ' 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:28.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.804 --rc genhtml_branch_coverage=1 00:11:28.804 --rc genhtml_function_coverage=1 00:11:28.804 --rc genhtml_legend=1 00:11:28.804 --rc geninfo_all_blocks=1 00:11:28.804 --rc geninfo_unexecuted_blocks=1 00:11:28.804 00:11:28.804 ' 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:28.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.804 --rc genhtml_branch_coverage=1 00:11:28.804 --rc genhtml_function_coverage=1 00:11:28.804 --rc genhtml_legend=1 00:11:28.804 --rc geninfo_all_blocks=1 00:11:28.804 --rc geninfo_unexecuted_blocks=1 00:11:28.804 00:11:28.804 ' 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:28.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:28.804 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:28.805 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:28.805 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:28.805 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:28.805 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:28.805 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.805 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:28.805 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:28.805 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:28.805 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.805 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.805 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.805 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:28.805 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:28.805 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:28.805 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:34.079 11:06:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:34.079 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:34.079 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:34.079 Found net devices under 0000:af:00.0: cvl_0_0 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:34.079 Found net devices under 0000:af:00.1: cvl_0_1 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:34.079 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.080 11:06:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:34.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:11:34.080 00:11:34.080 --- 10.0.0.2 ping statistics --- 00:11:34.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.080 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:34.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:11:34.080 00:11:34.080 --- 10.0.0.1 ping statistics --- 00:11:34.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.080 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=1951747 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 1951747 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1951747 ']' 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.080 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.080 [2024-10-06 11:06:31.647361] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:11:34.080 [2024-10-06 11:06:31.647407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.338 [2024-10-06 11:06:31.707147] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.338 [2024-10-06 11:06:31.745489] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.338 [2024-10-06 11:06:31.745533] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.338 [2024-10-06 11:06:31.745540] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.338 [2024-10-06 11:06:31.745550] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.338 [2024-10-06 11:06:31.745555] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:34.338 [2024-10-06 11:06:31.747078] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.338 [2024-10-06 11:06:31.747134] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.338 [2024-10-06 11:06:31.747226] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.338 [2024-10-06 11:06:31.747227] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.338 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.338 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:34.338 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:34.338 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:34.338 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.338 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.338 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:34.338 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.338 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.338 [2024-10-06 11:06:31.906118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.338 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.596 Null1 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.596 11:06:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.596 [2024-10-06 11:06:31.951652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.596 Null2 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:34.596 Null3 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:34.596 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.596 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.596 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.596 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:34.596 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.596 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.596 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.596 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:34.596 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.596 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.596 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.596 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:34.596 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:34.596 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.596 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.596 Null4 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.597 11:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.597 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:34.855 00:11:34.855 Discovery Log Number of Records 6, Generation counter 6 00:11:34.855 =====Discovery Log Entry 0====== 00:11:34.855 trtype: tcp 00:11:34.855 adrfam: ipv4 00:11:34.855 subtype: current discovery subsystem 00:11:34.855 treq: not required 00:11:34.855 portid: 0 00:11:34.855 trsvcid: 4420 00:11:34.855 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:34.855 traddr: 10.0.0.2 00:11:34.855 eflags: explicit discovery connections, duplicate discovery information 00:11:34.855 sectype: none 00:11:34.855 =====Discovery Log Entry 1====== 00:11:34.855 trtype: tcp 00:11:34.855 adrfam: ipv4 00:11:34.855 subtype: nvme subsystem 00:11:34.855 treq: not required 00:11:34.855 portid: 0 00:11:34.855 trsvcid: 4420 00:11:34.855 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:34.855 traddr: 10.0.0.2 00:11:34.855 eflags: none 00:11:34.855 sectype: none 00:11:34.855 =====Discovery Log Entry 2====== 00:11:34.855 trtype: tcp 00:11:34.855 adrfam: ipv4 00:11:34.855 subtype: nvme subsystem 00:11:34.855 treq: not required 00:11:34.855 portid: 0 00:11:34.855 trsvcid: 4420 00:11:34.855 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:34.855 traddr: 10.0.0.2 00:11:34.855 eflags: none 00:11:34.855 sectype: none 00:11:34.855 =====Discovery Log Entry 3====== 00:11:34.855 trtype: tcp 00:11:34.855 adrfam: ipv4 00:11:34.855 subtype: nvme subsystem 00:11:34.855 treq: not required 00:11:34.855 portid: 0 00:11:34.855 trsvcid: 4420 00:11:34.855 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:34.855 traddr: 10.0.0.2 00:11:34.855 eflags: none 00:11:34.855 sectype: none 00:11:34.855 =====Discovery Log Entry 4====== 00:11:34.855 trtype: tcp 00:11:34.855 adrfam: ipv4 00:11:34.855 subtype: nvme subsystem 
00:11:34.855 treq: not required 00:11:34.855 portid: 0 00:11:34.855 trsvcid: 4420 00:11:34.855 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:34.855 traddr: 10.0.0.2 00:11:34.855 eflags: none 00:11:34.855 sectype: none 00:11:34.855 =====Discovery Log Entry 5====== 00:11:34.855 trtype: tcp 00:11:34.855 adrfam: ipv4 00:11:34.855 subtype: discovery subsystem referral 00:11:34.855 treq: not required 00:11:34.855 portid: 0 00:11:34.855 trsvcid: 4430 00:11:34.855 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:34.855 traddr: 10.0.0.2 00:11:34.855 eflags: none 00:11:34.855 sectype: none 00:11:34.855 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:34.855 Perform nvmf subsystem discovery via RPC 00:11:34.855 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:34.855 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.855 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.855 [ 00:11:34.855 { 00:11:34.855 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:34.855 "subtype": "Discovery", 00:11:34.855 "listen_addresses": [ 00:11:34.855 { 00:11:34.855 "trtype": "TCP", 00:11:34.855 "adrfam": "IPv4", 00:11:34.855 "traddr": "10.0.0.2", 00:11:34.855 "trsvcid": "4420" 00:11:34.856 } 00:11:34.856 ], 00:11:34.856 "allow_any_host": true, 00:11:34.856 "hosts": [] 00:11:34.856 }, 00:11:34.856 { 00:11:34.856 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:34.856 "subtype": "NVMe", 00:11:34.856 "listen_addresses": [ 00:11:34.856 { 00:11:34.856 "trtype": "TCP", 00:11:34.856 "adrfam": "IPv4", 00:11:34.856 "traddr": "10.0.0.2", 00:11:34.856 "trsvcid": "4420" 00:11:34.856 } 00:11:34.856 ], 00:11:34.856 "allow_any_host": true, 00:11:34.856 "hosts": [], 00:11:34.856 "serial_number": "SPDK00000000000001", 00:11:34.856 "model_number": "SPDK bdev Controller", 00:11:34.856 "max_namespaces": 32, 00:11:34.856 "min_cntlid": 1, 00:11:34.856 "max_cntlid": 65519, 00:11:34.856 "namespaces": [ 00:11:34.856 { 00:11:34.856 "nsid": 1, 00:11:34.856 "bdev_name": "Null1", 00:11:34.856 "name": "Null1", 00:11:34.856 "nguid": "267112901488418F8BC619DCC9280428", 00:11:34.856 "uuid": "26711290-1488-418f-8bc6-19dcc9280428" 00:11:34.856 } 00:11:34.856 ] 00:11:34.856 }, 00:11:34.856 { 00:11:34.856 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:34.856 "subtype": "NVMe", 00:11:34.856 "listen_addresses": [ 00:11:34.856 { 00:11:34.856 "trtype": "TCP", 00:11:34.856 "adrfam": "IPv4", 00:11:34.856 "traddr": "10.0.0.2", 00:11:34.856 "trsvcid": "4420" 00:11:34.856 } 00:11:34.856 ], 00:11:34.856 "allow_any_host": true, 00:11:34.856 "hosts": [], 00:11:34.856 "serial_number": "SPDK00000000000002", 00:11:34.856 "model_number": "SPDK bdev Controller", 00:11:34.856 "max_namespaces": 32, 00:11:34.856 "min_cntlid": 1, 00:11:34.856 "max_cntlid": 65519, 00:11:34.856 "namespaces": [ 00:11:34.856 { 00:11:34.856 "nsid": 1, 00:11:34.856 "bdev_name": "Null2", 00:11:34.856 "name": "Null2", 00:11:34.856 "nguid": "9DAFE099371E4802A73A1072A958B544", 00:11:34.856 "uuid": "9dafe099-371e-4802-a73a-1072a958b544" 00:11:34.856 } 00:11:34.856 ] 00:11:34.856 }, 00:11:34.856 { 00:11:34.856 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:34.856 "subtype": "NVMe", 00:11:34.856 "listen_addresses": [ 00:11:34.856 { 00:11:34.856 "trtype": "TCP", 00:11:34.856 "adrfam": "IPv4", 00:11:34.856 "traddr": "10.0.0.2", 
00:11:34.856 "trsvcid": "4420" 00:11:34.856 } 00:11:34.856 ], 00:11:34.856 "allow_any_host": true, 00:11:34.856 "hosts": [], 00:11:34.856 "serial_number": "SPDK00000000000003", 00:11:34.856 "model_number": "SPDK bdev Controller", 00:11:34.856 "max_namespaces": 32, 00:11:34.856 "min_cntlid": 1, 00:11:34.856 "max_cntlid": 65519, 00:11:34.856 "namespaces": [ 00:11:34.856 { 00:11:34.856 "nsid": 1, 00:11:34.856 "bdev_name": "Null3", 00:11:34.856 "name": "Null3", 00:11:34.856 "nguid": "59A2FF4BC83545DD88D47F6F11925B9A", 00:11:34.856 "uuid": "59a2ff4b-c835-45dd-88d4-7f6f11925b9a" 00:11:34.856 } 00:11:34.856 ] 00:11:34.856 }, 00:11:34.856 { 00:11:34.856 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:34.856 "subtype": "NVMe", 00:11:34.856 "listen_addresses": [ 00:11:34.856 { 00:11:34.856 "trtype": "TCP", 00:11:34.856 "adrfam": "IPv4", 00:11:34.856 "traddr": "10.0.0.2", 00:11:34.856 "trsvcid": "4420" 00:11:34.856 } 00:11:34.856 ], 00:11:34.856 "allow_any_host": true, 00:11:34.856 "hosts": [], 00:11:34.856 "serial_number": "SPDK00000000000004", 00:11:34.856 "model_number": "SPDK bdev Controller", 00:11:34.856 "max_namespaces": 32, 00:11:34.856 "min_cntlid": 1, 00:11:34.856 "max_cntlid": 65519, 00:11:34.856 "namespaces": [ 00:11:34.856 { 00:11:34.856 "nsid": 1, 00:11:34.856 "bdev_name": "Null4", 00:11:34.856 "name": "Null4", 00:11:34.856 "nguid": "39352C92F986452BA9A74DE26201470F", 00:11:34.856 "uuid": "39352c92-f986-452b-a9a7-4de26201470f" 00:11:34.856 } 00:11:34.856 ] 00:11:34.856 } 00:11:34.856 ] 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.856 11:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:34.856 11:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:34.856 rmmod nvme_tcp 00:11:34.856 rmmod nvme_fabrics 00:11:34.856 rmmod nvme_keyring 00:11:34.856 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 1951747 ']' 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 1951747 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1951747 ']' 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1951747 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1951747 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1951747' 00:11:35.116 killing process with pid 1951747 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1951747 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1951747 00:11:35.116 11:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:35.116 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:11:35.374 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:35.374 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:35.374 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.374 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.374 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.277 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:37.277 00:11:37.277 real 0m8.647s 00:11:37.277 user 0m5.323s 00:11:37.277 sys 0m4.298s 00:11:37.277 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:37.277 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.277 ************************************ 00:11:37.277 END TEST nvmf_target_discovery 00:11:37.277 ************************************ 00:11:37.277 11:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:37.277 11:06:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:37.277 11:06:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.277 11:06:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:37.277 ************************************ 00:11:37.277 START TEST nvmf_referrals 00:11:37.277 ************************************ 00:11:37.277 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:37.537 * Looking for test storage... 
00:11:37.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:37.537 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:37.537 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:37.537 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:37.537 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:37.537 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:37.537 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:37.537 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:37.537 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:37.537 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:37.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.537 --rc genhtml_branch_coverage=1 00:11:37.537 --rc genhtml_function_coverage=1 00:11:37.537 --rc genhtml_legend=1 00:11:37.537 --rc geninfo_all_blocks=1 00:11:37.537 --rc geninfo_unexecuted_blocks=1 00:11:37.537 00:11:37.537 ' 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:37.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.538 --rc genhtml_branch_coverage=1 00:11:37.538 --rc genhtml_function_coverage=1 00:11:37.538 --rc genhtml_legend=1 00:11:37.538 --rc geninfo_all_blocks=1 00:11:37.538 --rc geninfo_unexecuted_blocks=1 00:11:37.538 00:11:37.538 ' 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:37.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.538 --rc genhtml_branch_coverage=1 00:11:37.538 --rc genhtml_function_coverage=1 00:11:37.538 --rc genhtml_legend=1 00:11:37.538 --rc geninfo_all_blocks=1 00:11:37.538 --rc geninfo_unexecuted_blocks=1 00:11:37.538 00:11:37.538 ' 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:37.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.538 --rc genhtml_branch_coverage=1 00:11:37.538 --rc genhtml_function_coverage=1 00:11:37.538 --rc genhtml_legend=1 00:11:37.538 --rc geninfo_all_blocks=1 00:11:37.538 --rc geninfo_unexecuted_blocks=1 00:11:37.538 00:11:37.538 ' 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:37.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:37.538 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:42.822 11:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:42.822 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:42.823 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:42.823 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:42.823 
11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:42.823 Found net devices under 0000:af:00.0: cvl_0_0 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:42.823 Found net devices under 0000:af:00.1: cvl_0_1 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:42.823 11:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.823 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:43.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:43.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:11:43.082 00:11:43.082 --- 10.0.0.2 ping statistics --- 00:11:43.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.082 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:11:43.082 00:11:43.082 --- 10.0.0.1 ping statistics --- 00:11:43.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.082 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=1955450 00:11:43.082 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 1955450 00:11:43.083 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.083 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1955450 ']' 00:11:43.083 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.083 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:43.083 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:43.083 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:43.083 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.083 [2024-10-06 11:06:40.646143] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:11:43.083 [2024-10-06 11:06:40.646189] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.342 [2024-10-06 11:06:40.704608] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.342 [2024-10-06 11:06:40.742438] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.342 [2024-10-06 11:06:40.742478] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.342 [2024-10-06 11:06:40.742485] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.342 [2024-10-06 11:06:40.742491] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.342 [2024-10-06 11:06:40.742496] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.342 [2024-10-06 11:06:40.743917] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.342 [2024-10-06 11:06:40.744027] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.342 [2024-10-06 11:06:40.744160] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.342 [2024-10-06 11:06:40.744166] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.342 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:43.342 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:43.342 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:43.342 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:43.342 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.342 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.342 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:43.342 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.342 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.342 [2024-10-06 11:06:40.893109] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.342 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.342 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:43.342 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.342 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:11:43.342 [2024-10-06 11:06:40.909368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:43.342 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.342 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:43.342 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.342 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:43.601 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.601 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.601 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:43.601 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:43.601 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:43.601 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:43.601 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:43.601 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:43.601 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:43.601 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:43.860 11:06:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:43.860 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:44.118 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:44.377 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:44.377 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:44.377 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:44.377 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:44.377 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:44.377 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.377 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:44.636 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:44.636 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:44.636 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:44.636 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:44.636 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.636 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:44.636 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:44.636 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:44.636 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.636 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.636 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.636 11:06:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:44.636 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:44.636 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.636 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:44.636 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.636 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:44.636 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.636 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.935 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:44.935 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:44.935 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:44.935 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:44.935 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:44.935 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.935 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:44.935 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:44.935 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:44.935 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:44.935 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:44.935 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:44.935 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:44.935 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.935 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:45.218 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:45.218 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:45.218 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:45.218 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:45.218 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.218 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:45.218 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:45.218 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:45.218 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.218 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.218 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.218 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:45.218 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:45.218 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.218 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.218 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.505 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:45.505 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:45.505 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:45.505 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:45.505 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.505 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:45.505 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:45.505 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:45.505 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:45.505 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:45.505 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:45.505 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:45.505 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:45.505 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:11:45.505 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:45.505 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:45.505 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:45.505 rmmod nvme_tcp 00:11:45.505 rmmod nvme_fabrics 00:11:45.795 rmmod nvme_keyring 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 1955450 ']' 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 1955450 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1955450 ']' 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1955450 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1955450 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1955450' 00:11:45.795 killing process with pid 1955450 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1955450 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1955450 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:11:45.795 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:11:46.093 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:46.093 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:46.093 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.093 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.093 11:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.002 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:48.002 00:11:48.002 real 0m10.588s 00:11:48.002 user 0m12.617s 00:11:48.002 sys 0m4.954s 00:11:48.002 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:48.002 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:48.002 ************************************ 00:11:48.002 END TEST nvmf_referrals 00:11:48.002 ************************************ 00:11:48.003 11:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:48.003 11:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:48.003 11:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:48.003 11:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:48.003 ************************************ 00:11:48.003 START TEST nvmf_connect_disconnect 00:11:48.003 ************************************ 00:11:48.003 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:48.003 * Looking for test storage... 00:11:48.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.262 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:48.262 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:11:48.262 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:48.262 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:48.262 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.262 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.262 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.262 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.262 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.262 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.262 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.263 11:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:48.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.263 --rc genhtml_branch_coverage=1 00:11:48.263 --rc genhtml_function_coverage=1 00:11:48.263 --rc genhtml_legend=1 00:11:48.263 --rc geninfo_all_blocks=1 00:11:48.263 --rc geninfo_unexecuted_blocks=1 00:11:48.263 00:11:48.263 ' 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:48.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.263 --rc genhtml_branch_coverage=1 00:11:48.263 --rc genhtml_function_coverage=1 00:11:48.263 --rc genhtml_legend=1 00:11:48.263 --rc geninfo_all_blocks=1 00:11:48.263 --rc geninfo_unexecuted_blocks=1 00:11:48.263 00:11:48.263 ' 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:48.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.263 --rc genhtml_branch_coverage=1 00:11:48.263 --rc genhtml_function_coverage=1 00:11:48.263 --rc genhtml_legend=1 00:11:48.263 --rc geninfo_all_blocks=1 00:11:48.263 --rc geninfo_unexecuted_blocks=1 00:11:48.263 00:11:48.263 ' 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:48.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.263 --rc genhtml_branch_coverage=1 00:11:48.263 --rc genhtml_function_coverage=1 00:11:48.263 --rc genhtml_legend=1 00:11:48.263 --rc geninfo_all_blocks=1 00:11:48.263 --rc geninfo_unexecuted_blocks=1 00:11:48.263 00:11:48.263 ' 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.263 11:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:48.263 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.264 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.264 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.264 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:48.264 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:48.264 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:48.264 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.534 
11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:53.534 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.534 
11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:53.534 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:53.534 Found net devices under 0000:af:00.0: cvl_0_0 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:53.534 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:53.535 Found net devices under 0000:af:00.1: cvl_0_1 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:53.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:11:53.535 00:11:53.535 --- 10.0.0.2 ping statistics --- 00:11:53.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.535 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:53.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:11:53.535 00:11:53.535 --- 10.0.0.1 ping statistics --- 00:11:53.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.535 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=1959258 00:11:53.535 11:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 1959258 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1959258 ']' 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:53.535 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.535 [2024-10-06 11:06:50.898777] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:11:53.535 [2024-10-06 11:06:50.898820] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.535 [2024-10-06 11:06:50.956362] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.535 [2024-10-06 11:06:50.996297] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.535 [2024-10-06 11:06:50.996339] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.535 [2024-10-06 11:06:50.996346] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.535 [2024-10-06 11:06:50.996352] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.535 [2024-10-06 11:06:50.996357] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:53.535 [2024-10-06 11:06:50.997788] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.535 [2024-10-06 11:06:50.997890] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.535 [2024-10-06 11:06:50.997932] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.535 [2024-10-06 11:06:50.997934] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.535 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:53.535 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:53.535 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:53.535 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:53.535 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.794 [2024-10-06 11:06:51.143047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.794 11:06:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.794 [2024-10-06 11:06:51.194459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:53.794 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:56.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.568 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:14:58.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.182 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:45.182 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:45.182 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:45.182 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:15:45.182 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:45.182 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:15:45.182 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:45.182 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:45.182 rmmod nvme_tcp 00:15:45.182 rmmod nvme_fabrics 00:15:45.182 rmmod nvme_keyring 00:15:45.182 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:45.182 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:15:45.182 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:15:45.182 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 1959258 ']' 00:15:45.182 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 1959258 00:15:45.182 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1959258 ']' 00:15:45.182 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1959258 00:15:45.183 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
00:15:45.183 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:45.183 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1959258 00:15:45.183 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:45.183 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:45.183 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1959258' 00:15:45.183 killing process with pid 1959258 00:15:45.183 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1959258 00:15:45.183 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1959258 00:15:45.442 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:45.442 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:45.442 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:45.442 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:15:45.442 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:15:45.442 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:45.442 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:15:45.442 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:45.442 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:45.442 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.442 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.442 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.348 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:47.348 00:15:47.348 real 3m59.365s 00:15:47.348 user 15m15.747s 00:15:47.348 sys 0m25.747s 00:15:47.348 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:47.348 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:47.348 ************************************ 00:15:47.348 END TEST nvmf_connect_disconnect 00:15:47.348 ************************************ 00:15:47.348 11:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:47.348 11:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:47.348 11:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:47.348 11:10:44 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:47.608 ************************************ 00:15:47.608 START TEST nvmf_multitarget 00:15:47.608 ************************************ 00:15:47.608 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:47.608 * Looking for test storage... 00:15:47.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:47.608 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:47.608 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:15:47.608 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:47.608 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:47.608 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:47.608 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:47.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.609 --rc genhtml_branch_coverage=1 00:15:47.609 --rc genhtml_function_coverage=1 00:15:47.609 --rc genhtml_legend=1 00:15:47.609 --rc geninfo_all_blocks=1 00:15:47.609 --rc geninfo_unexecuted_blocks=1 00:15:47.609 00:15:47.609 ' 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:47.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.609 --rc genhtml_branch_coverage=1 00:15:47.609 --rc genhtml_function_coverage=1 00:15:47.609 --rc genhtml_legend=1 00:15:47.609 --rc geninfo_all_blocks=1 00:15:47.609 --rc geninfo_unexecuted_blocks=1 00:15:47.609 00:15:47.609 ' 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:47.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.609 --rc genhtml_branch_coverage=1 00:15:47.609 --rc genhtml_function_coverage=1 00:15:47.609 --rc genhtml_legend=1 00:15:47.609 --rc geninfo_all_blocks=1 00:15:47.609 --rc geninfo_unexecuted_blocks=1 00:15:47.609 00:15:47.609 ' 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:47.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.609 --rc genhtml_branch_coverage=1 00:15:47.609 --rc genhtml_function_coverage=1 00:15:47.609 --rc genhtml_legend=1 00:15:47.609 --rc geninfo_all_blocks=1 00:15:47.609 --rc geninfo_unexecuted_blocks=1 00:15:47.609 00:15:47.609 ' 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.609 11:10:45 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:47.609 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:47.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:47.610 11:10:45 
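[editor's note] The "/test/nvmf/common.sh: line 33: [: : integer expression expected" message above is harmless: a numeric test runs against a variable that expands to the empty string ('[' '' -eq 1 ']'), the test simply evaluates false, and build_nvmf_app_args continues. The trace does not reveal which flag is being tested, so the snippet below uses a hypothetical FLAG name purely to illustrate the usual way to keep such a check quiet.

    # FLAG stands in for whatever variable common.sh line 33 tests; the trace
    # only shows its empty expansion, so the real name is not known here.
    FLAG=${FLAG:-0}                 # default empty/unset to 0 before the numeric test
    if [ "$FLAG" -eq 1 ]; then
        echo "optional nvmf app argument would be appended here"
    fi

[end editor's note]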
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:15:47.610 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
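[editor's note] gather_supported_nvmf_pci_devs is filling per-family arrays (e810, x722, mlx) with PCI vendor:device pairs here; once populated, each matching PCI address is resolved to its kernel net device through sysfs, as the "Found net devices under ..." records a little further down show. A stripped-down sketch of that resolution step, using the two PCI addresses actually logged on this host (the loop itself is illustrative, not the SPDK code):

    # Resolve each matching PCI function to its net device name via the same
    # /sys/bus/pci/devices/<pci>/net/* lookup the trace performs.
    for pci in 0000:af:00.0 0000:af:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue          # glob matched nothing on this host
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done

[end editor's note]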
00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:54.181 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:54.181 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:54.181 Found net devices under 0000:af:00.0: cvl_0_0 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:54.181 Found net devices under 0000:af:00.1: cvl_0_1 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:54.181 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:54.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:15:54.182 00:15:54.182 --- 10.0.0.2 ping statistics --- 00:15:54.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.182 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:54.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:54.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:15:54.182 00:15:54.182 --- 10.0.0.1 ping statistics --- 00:15:54.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.182 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=2002117 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 2002117 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2002117 ']' 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:54.182 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:54.182 [2024-10-06 11:10:51.001055] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
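[editor's note] Just before the application launch traced above, nvmf_tcp_init stitched the two E810 ports into a point-to-point TCP path: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, cvl_0_1 stayed in the root namespace as 10.0.0.1, an iptables ACCEPT was added for port 4420, and the two pings confirmed both directions. Condensed from the traced commands, with device names and addresses exactly as logged:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

[end editor's note]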
00:15:54.182 [2024-10-06 11:10:51.001103] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.182 [2024-10-06 11:10:51.060472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:54.182 [2024-10-06 11:10:51.099710] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.182 [2024-10-06 11:10:51.099749] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.182 [2024-10-06 11:10:51.099756] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.182 [2024-10-06 11:10:51.099763] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.182 [2024-10-06 11:10:51.099767] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:54.182 [2024-10-06 11:10:51.101126] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.182 [2024-10-06 11:10:51.101146] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.182 [2024-10-06 11:10:51.101219] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.182 [2024-10-06 11:10:51.101218] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:54.182 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:54.182 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:15:54.182 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:54.182 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:54.182 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:54.182 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.182 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:54.182 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:54.182 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:54.182 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:54.182 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:54.182 "nvmf_tgt_1" 00:15:54.182 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:54.182 "nvmf_tgt_2" 00:15:54.182 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:15:54.182 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:54.182 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:54.182 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:54.442 true 00:15:54.442 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:54.442 true 00:15:54.442 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:54.442 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:54.701 rmmod nvme_tcp 00:15:54.701 rmmod nvme_fabrics 00:15:54.701 rmmod nvme_keyring 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 2002117 ']' 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 2002117 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2002117 ']' 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2002117 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2002117 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:54.701 11:10:52 
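[editor's note] The assertions above are the whole of the multitarget test: the target count starts at 1 (the default target), grows to 3 after two nvmf_create_target calls, and drops back to 1 after both deletions, each count checked with jq length. The same sequence written out as a standalone script; paths and RPC arguments are as logged, while the expect_targets helper is an added illustration of the inline '[' N '!=' N ']' checks.

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    expect_targets() {                 # fail loudly if the target count differs from $1
        local want=$1 got
        got=$("$rpc" nvmf_get_targets | jq length)
        [ "$got" -eq "$want" ] || { echo "expected $want targets, got $got" >&2; exit 1; }
    }

    expect_targets 1                   # only the default target exists
    "$rpc" nvmf_create_target -n nvmf_tgt_1 -s 32
    "$rpc" nvmf_create_target -n nvmf_tgt_2 -s 32
    expect_targets 3
    "$rpc" nvmf_delete_target -n nvmf_tgt_1
    "$rpc" nvmf_delete_target -n nvmf_tgt_2
    expect_targets 1

[end editor's note]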
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2002117' 00:15:54.701 killing process with pid 2002117 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2002117 00:15:54.701 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2002117 00:15:54.961 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:54.961 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:54.961 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:54.961 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:15:54.961 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:15:54.961 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:54.961 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:15:54.961 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:54.961 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:54.961 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.961 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.961 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.867 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:56.867 00:15:56.867 real 0m9.469s 00:15:56.867 user 0m7.266s 00:15:56.867 sys 0m4.760s 00:15:56.867 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.867 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:56.867 ************************************ 00:15:56.867 END TEST nvmf_multitarget 00:15:56.867 ************************************ 00:15:56.867 11:10:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:56.867 11:10:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:56.867 11:10:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.867 11:10:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:57.126 ************************************ 00:15:57.126 START TEST nvmf_rpc 00:15:57.126 ************************************ 00:15:57.126 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:57.126 * Looking for test storage... 
00:15:57.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:57.126 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:57.126 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:15:57.126 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:57.126 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:57.126 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.126 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.126 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.126 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.126 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.126 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.126 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.126 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.126 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.126 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.126 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.126 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:57.126 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:57.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.127 --rc genhtml_branch_coverage=1 00:15:57.127 --rc genhtml_function_coverage=1 00:15:57.127 --rc genhtml_legend=1 00:15:57.127 --rc geninfo_all_blocks=1 00:15:57.127 --rc geninfo_unexecuted_blocks=1 00:15:57.127 00:15:57.127 ' 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:57.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.127 --rc genhtml_branch_coverage=1 00:15:57.127 --rc genhtml_function_coverage=1 00:15:57.127 --rc genhtml_legend=1 00:15:57.127 --rc geninfo_all_blocks=1 00:15:57.127 --rc geninfo_unexecuted_blocks=1 00:15:57.127 00:15:57.127 ' 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:57.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.127 --rc genhtml_branch_coverage=1 00:15:57.127 --rc genhtml_function_coverage=1 00:15:57.127 --rc genhtml_legend=1 00:15:57.127 --rc geninfo_all_blocks=1 00:15:57.127 --rc geninfo_unexecuted_blocks=1 00:15:57.127 00:15:57.127 ' 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:57.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.127 --rc genhtml_branch_coverage=1 00:15:57.127 --rc genhtml_function_coverage=1 00:15:57.127 --rc genhtml_legend=1 00:15:57.127 --rc geninfo_all_blocks=1 00:15:57.127 --rc geninfo_unexecuted_blocks=1 00:15:57.127 00:15:57.127 ' 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:57.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:57.127 11:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:15:57.127 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:03.700 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:03.700 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:03.700 Found net devices under 0000:af:00.0: cvl_0_0 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:03.700 Found net devices under 0000:af:00.1: cvl_0_1 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.700 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:03.701 11:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:03.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:16:03.701 00:16:03.701 --- 10.0.0.2 ping statistics --- 00:16:03.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.701 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:03.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:03.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:16:03.701 00:16:03.701 --- 10.0.0.1 ping statistics --- 00:16:03.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.701 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=2005837 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 2005837 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2005837 ']' 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.701 [2024-10-06 11:11:00.592596] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
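[editor's note] nvmfappstart above boots nvmf_tgt inside the target namespace with shared-memory id 0, the full 0xFFFF tracepoint mask, and a 4-core mask, then waitforlisten blocks until the process answers on /var/tmp/spdk.sock. The launch values below are exactly as traced; the wait loop is a simplified stand-in for SPDK's waitforlisten helper, polling the default RPC socket with scripts/rpc.py.

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do          # give the target up to ~10s to start listening
        "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done

[end editor's note]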
00:16:03.701 [2024-10-06 11:11:00.592642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.701 [2024-10-06 11:11:00.652228] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.701 [2024-10-06 11:11:00.691764] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.701 [2024-10-06 11:11:00.691805] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.701 [2024-10-06 11:11:00.691812] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.701 [2024-10-06 11:11:00.691818] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.701 [2024-10-06 11:11:00.691823] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.701 [2024-10-06 11:11:00.693336] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.701 [2024-10-06 11:11:00.693353] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.701 [2024-10-06 11:11:00.693444] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.701 [2024-10-06 11:11:00.693445] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:03.701 "tick_rate": 2100000000, 00:16:03.701 "poll_groups": [ 00:16:03.701 { 00:16:03.701 "name": "nvmf_tgt_poll_group_000", 00:16:03.701 "admin_qpairs": 0, 00:16:03.701 "io_qpairs": 0, 00:16:03.701 "current_admin_qpairs": 0, 00:16:03.701 "current_io_qpairs": 0, 00:16:03.701 "pending_bdev_io": 0, 00:16:03.701 "completed_nvme_io": 0, 00:16:03.701 "transports": [] 00:16:03.701 }, 00:16:03.701 { 00:16:03.701 "name": "nvmf_tgt_poll_group_001", 00:16:03.701 "admin_qpairs": 0, 00:16:03.701 "io_qpairs": 0, 00:16:03.701 "current_admin_qpairs": 0, 00:16:03.701 "current_io_qpairs": 0, 00:16:03.701 "pending_bdev_io": 0, 00:16:03.701 "completed_nvme_io": 0, 00:16:03.701 "transports": [] 00:16:03.701 }, 00:16:03.701 { 00:16:03.701 "name": "nvmf_tgt_poll_group_002", 00:16:03.701 "admin_qpairs": 0, 00:16:03.701 "io_qpairs": 0, 00:16:03.701 
"current_admin_qpairs": 0, 00:16:03.701 "current_io_qpairs": 0, 00:16:03.701 "pending_bdev_io": 0, 00:16:03.701 "completed_nvme_io": 0, 00:16:03.701 "transports": [] 00:16:03.701 }, 00:16:03.701 { 00:16:03.701 "name": "nvmf_tgt_poll_group_003", 00:16:03.701 "admin_qpairs": 0, 00:16:03.701 "io_qpairs": 0, 00:16:03.701 "current_admin_qpairs": 0, 00:16:03.701 "current_io_qpairs": 0, 00:16:03.701 "pending_bdev_io": 0, 00:16:03.701 "completed_nvme_io": 0, 00:16:03.701 "transports": [] 00:16:03.701 } 00:16:03.701 ] 00:16:03.701 }' 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.701 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.701 [2024-10-06 11:11:00.955332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.702 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.702 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:03.702 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.702 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.702 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.702 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:03.702 "tick_rate": 2100000000, 00:16:03.702 "poll_groups": [ 00:16:03.702 { 00:16:03.702 "name": "nvmf_tgt_poll_group_000", 00:16:03.702 "admin_qpairs": 0, 00:16:03.702 "io_qpairs": 0, 00:16:03.702 "current_admin_qpairs": 0, 00:16:03.702 "current_io_qpairs": 0, 00:16:03.702 "pending_bdev_io": 0, 00:16:03.702 "completed_nvme_io": 0, 00:16:03.702 "transports": [ 00:16:03.702 { 00:16:03.702 "trtype": "TCP" 00:16:03.702 } 00:16:03.702 ] 00:16:03.702 }, 00:16:03.702 { 00:16:03.702 "name": "nvmf_tgt_poll_group_001", 00:16:03.702 "admin_qpairs": 0, 00:16:03.702 "io_qpairs": 0, 00:16:03.702 "current_admin_qpairs": 0, 00:16:03.702 "current_io_qpairs": 0, 00:16:03.702 "pending_bdev_io": 0, 00:16:03.702 "completed_nvme_io": 0, 00:16:03.702 "transports": [ 00:16:03.702 { 00:16:03.702 "trtype": "TCP" 00:16:03.702 } 00:16:03.702 ] 00:16:03.702 }, 00:16:03.702 { 00:16:03.702 "name": "nvmf_tgt_poll_group_002", 00:16:03.702 "admin_qpairs": 0, 00:16:03.702 "io_qpairs": 0, 00:16:03.702 "current_admin_qpairs": 0, 00:16:03.702 "current_io_qpairs": 0, 00:16:03.702 "pending_bdev_io": 0, 00:16:03.702 "completed_nvme_io": 0, 00:16:03.702 "transports": [ 00:16:03.702 { 00:16:03.702 "trtype": "TCP" 
00:16:03.702 } 00:16:03.702 ] 00:16:03.702 }, 00:16:03.702 { 00:16:03.702 "name": "nvmf_tgt_poll_group_003", 00:16:03.702 "admin_qpairs": 0, 00:16:03.702 "io_qpairs": 0, 00:16:03.702 "current_admin_qpairs": 0, 00:16:03.702 "current_io_qpairs": 0, 00:16:03.702 "pending_bdev_io": 0, 00:16:03.702 "completed_nvme_io": 0, 00:16:03.702 "transports": [ 00:16:03.702 { 00:16:03.702 "trtype": "TCP" 00:16:03.702 } 00:16:03.702 ] 00:16:03.702 } 00:16:03.702 ] 00:16:03.702 }' 00:16:03.702 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:03.702 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:03.702 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:03.702 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.702 Malloc1 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.702 [2024-10-06 11:11:01.131026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:03.702 [2024-10-06 11:11:01.165673] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:03.702 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:03.702 could not add new controller: failed to write to nvme-fabrics device 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:03.702 11:11:01 
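Everything on the data path is provisioned over RPC before the first connect attempt: a 64 MB malloc bdev with 512-byte blocks, a subsystem carrying the SPDKISFASTANDAWESOME serial, the bdev attached as a namespace, allow_any_host explicitly disabled, and a TCP listener on 10.0.0.2:4420. Because the host NQN is not on the subsystem's allow list yet, the nvme connect above is expected to fail, and the NOT wrapper turns that failure into a pass. A condensed sketch of the same flow (NQN and host UUID values are the ones from this run):

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # expected to fail: this host NQN is not allowed on the subsystem yet
  if nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid=80b56b8f-cbc7-e911-906e-0017a4403562; then
      echo "unexpected: connect succeeded" >&2
      exit 1
  fi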
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.702 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:05.081 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:05.081 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:05.081 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:05.081 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:05.081 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:06.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:06.985 [2024-10-06 11:11:04.479572] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:06.985 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:06.985 could not add new controller: failed to write to nvme-fabrics device 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.985 
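The round trip above exercises per-host access control on the subsystem: nvmf_subsystem_add_host puts the initiator's NQN on the allow list and the connect succeeds; nvmf_subsystem_remove_host takes it off and the identical connect is rejected again; nvmf_subsystem_allow_any_host -e then opens the subsystem to any initiator, which the connect that follows relies on. Sketch of the three states (same NQNs as in the run):

  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

  scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"
  # connect with --hostnqn=$HOSTNQN now succeeds; disconnect before changing state
  nvme disconnect -n "$SUBNQN"

  scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
  # the same connect is rejected: "Subsystem ... does not allow host ..."

  scripts/rpc.py nvmf_subsystem_allow_any_host -e "$SUBNQN"
  # now any host NQN may connect without an explicit add_host entry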
11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.985 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:08.361 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:08.361 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:08.361 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.361 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:08.361 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:10.289 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:10.289 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:10.289 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:10.289 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:10.289 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:10.289 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:10.290 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:10.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:10.549 
11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.549 [2024-10-06 11:11:07.933420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.549 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:11.926 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:11.926 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:11.926 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:11.926 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:11.926 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:13.832 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:13.832 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:13.832 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:13.832 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:13.832 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:13.832 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:13.832 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:13.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.832 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:13.832 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:13.832 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:13.832 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:13.832 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:13.832 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:13.832 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.833 [2024-10-06 11:11:11.282347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
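Each pass of the first loop (seq 1 5) rebuilds the subsystem from scratch, connects, waits for the namespace to surface as a block device by matching the subsystem serial in lsblk, then tears down in reverse order: disconnect, wait for the device to disappear, remove the namespace, delete the subsystem. The waitforserial/waitforserial_disconnect helpers are bounded polling loops over lsblk; a simplified sketch of one iteration in their spirit (namespace ID 5 and the serial are the values used above):

  SUBNQN=nqn.2016-06.io.spdk:cnode1
  SERIAL=SPDKISFASTANDAWESOME
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

  scripts/rpc.py nvmf_create_subsystem "$SUBNQN" -s "$SERIAL"
  scripts/rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5
  scripts/rpc.py nvmf_subsystem_allow_any_host "$SUBNQN"     # called bare, exactly as above

  nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 \
      --hostnqn="$HOSTNQN" --hostid=80b56b8f-cbc7-e911-906e-0017a4403562

  # wait for the block device carrying the subsystem serial to appear ...
  until lsblk -l -o NAME,SERIAL | grep -qw "$SERIAL"; do sleep 2; done

  nvme disconnect -n "$SUBNQN"
  # ... and for it to disappear again before tearing the subsystem down
  while lsblk -l -o NAME,SERIAL | grep -qw "$SERIAL"; do sleep 1; done

  scripts/rpc.py nvmf_subsystem_remove_ns "$SUBNQN" 5
  scripts/rpc.py nvmf_delete_subsystem "$SUBNQN"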
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.833 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:15.212 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:15.212 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:15.212 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.212 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:15.212 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:17.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.118 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.119 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.119 [2024-10-06 11:11:14.591078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.119 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.119 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:17.119 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.119 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.119 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.119 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:17.119 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.119 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.119 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.119 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:18.497 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:18.497 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:18.497 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.497 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:18.497 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:20.403 
11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:20.403 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:20.403 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:20.403 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:20.403 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:20.403 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:20.403 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:20.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.403 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:20.403 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:20.403 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:20.403 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:20.403 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:20.403 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:20.403 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:20.403 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:20.403 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.403 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.403 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.403 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.404 [2024-10-06 11:11:17.930539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.404 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:21.779 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:21.779 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:21.779 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:21.779 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:21.779 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:23.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.684 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.942 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.942 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:23.942 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:23.942 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.942 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.942 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.942 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:23.942 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.942 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.942 [2024-10-06 11:11:21.279507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.942 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.943 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:23.943 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.943 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.943 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.943 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:23.943 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.943 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.943 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.943 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:24.879 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:24.879 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:24.879 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:24.879 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:24.879 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:27.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:27.414 
11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 [2024-10-06 11:11:24.553538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
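The second loop (rpc.sh@99 onward) never involves the initiator; it only churns the RPC surface five times: create the subsystem, add the TCP listener, attach Malloc1 without an explicit -n (the later remove_ns 1 shows it was assigned namespace ID 1), call allow_any_host, then remove the namespace and delete the subsystem. A sketch of the whole loop:

  SUBNQN=nqn.2016-06.io.spdk:cnode1
  for i in $(seq 1 5); do
      scripts/rpc.py nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
      scripts/rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
      scripts/rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1     # no -n: nsid 1 gets assigned
      scripts/rpc.py nvmf_subsystem_allow_any_host "$SUBNQN"     # called bare, as above
      scripts/rpc.py nvmf_subsystem_remove_ns "$SUBNQN" 1
      scripts/rpc.py nvmf_delete_subsystem "$SUBNQN"
  done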
common/autotest_common.sh@10 -- # set +x 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 [2024-10-06 11:11:24.601638] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 
11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 [2024-10-06 11:11:24.649778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.414 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.415 [2024-10-06 11:11:24.697958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.415 [2024-10-06 11:11:24.746134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:27.415 "tick_rate": 2100000000, 00:16:27.415 "poll_groups": [ 00:16:27.415 { 00:16:27.415 "name": "nvmf_tgt_poll_group_000", 00:16:27.415 "admin_qpairs": 2, 00:16:27.415 "io_qpairs": 168, 00:16:27.415 "current_admin_qpairs": 0, 00:16:27.415 "current_io_qpairs": 0, 00:16:27.415 "pending_bdev_io": 0, 00:16:27.415 "completed_nvme_io": 267, 00:16:27.415 "transports": [ 00:16:27.415 { 00:16:27.415 "trtype": "TCP" 00:16:27.415 } 00:16:27.415 ] 00:16:27.415 }, 00:16:27.415 { 00:16:27.415 "name": "nvmf_tgt_poll_group_001", 00:16:27.415 "admin_qpairs": 2, 00:16:27.415 "io_qpairs": 168, 00:16:27.415 "current_admin_qpairs": 0, 00:16:27.415 "current_io_qpairs": 0, 00:16:27.415 "pending_bdev_io": 0, 00:16:27.415 "completed_nvme_io": 220, 00:16:27.415 "transports": [ 00:16:27.415 { 00:16:27.415 "trtype": "TCP" 00:16:27.415 } 00:16:27.415 ] 00:16:27.415 }, 00:16:27.415 { 00:16:27.415 "name": "nvmf_tgt_poll_group_002", 00:16:27.415 "admin_qpairs": 1, 00:16:27.415 "io_qpairs": 168, 00:16:27.415 "current_admin_qpairs": 0, 00:16:27.415 "current_io_qpairs": 0, 00:16:27.415 "pending_bdev_io": 0, 00:16:27.415 "completed_nvme_io": 268, 00:16:27.415 "transports": [ 00:16:27.415 { 00:16:27.415 "trtype": "TCP" 00:16:27.415 } 00:16:27.415 ] 00:16:27.415 }, 00:16:27.415 { 00:16:27.415 "name": "nvmf_tgt_poll_group_003", 00:16:27.415 "admin_qpairs": 2, 00:16:27.415 "io_qpairs": 168, 00:16:27.415 "current_admin_qpairs": 0, 00:16:27.415 "current_io_qpairs": 0, 00:16:27.415 "pending_bdev_io": 0, 00:16:27.415 "completed_nvme_io": 267, 00:16:27.415 "transports": [ 00:16:27.415 { 00:16:27.415 "trtype": "TCP" 00:16:27.415 } 00:16:27.415 ] 00:16:27.415 } 00:16:27.415 ] 00:16:27.415 }' 00:16:27.415 11:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:27.415 rmmod nvme_tcp 00:16:27.415 rmmod nvme_fabrics 00:16:27.415 rmmod nvme_keyring 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 2005837 ']' 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 2005837 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2005837 ']' 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2005837 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:27.415 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2005837 00:16:27.675 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:27.675 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:27.675 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2005837' 00:16:27.675 killing process with pid 2005837 00:16:27.675 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2005837 00:16:27.675 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2005837 00:16:27.675 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:27.675 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:27.675 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:27.675 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:27.675 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:16:27.675 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:27.675 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:16:27.675 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:27.675 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:27.675 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.675 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:27.675 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.275 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:30.275 00:16:30.275 real 0m32.831s 00:16:30.275 user 1m39.312s 00:16:30.275 sys 0m6.379s 00:16:30.275 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:30.275 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.275 ************************************ 00:16:30.275 END TEST nvmf_rpc 00:16:30.275 ************************************ 00:16:30.275 11:11:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:30.275 11:11:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:30.275 11:11:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:30.275 11:11:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:30.275 ************************************ 00:16:30.275 START TEST nvmf_invalid 00:16:30.275 ************************************ 00:16:30.275 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:30.275 * Looking for test storage... 
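(illustrative aside, not part of the captured trace) The nvmf_rpc loop that ends above drives one full subsystem lifecycle per iteration through the SPDK JSON-RPC client; in the trace the calls go through the rpc_cmd wrapper, but a minimal sketch of the same sequence using the rpc.py path and the exact values visible in the log would be:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME        # create the subsystem with the test serial number
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # expose it on NVMe/TCP port 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1                         # attach a namespace backed by the Malloc1 bdev
  $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1                          # drop the host allow-list for the test
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1                             # detach namespace 1 again
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1                                  # tear the subsystem down before the next iteration

The per-iteration listener notices and the nvmf_get_stats/jsum totals above (admin_qpairs and io_qpairs summed across the four poll groups with jq piped into awk) are the loop's pass criteria; the sketch only restates the commands already present in the trace.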
00:16:30.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:30.275 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:30.275 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:16:30.275 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:30.275 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:30.275 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.275 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.275 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:30.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.276 --rc genhtml_branch_coverage=1 00:16:30.276 --rc genhtml_function_coverage=1 00:16:30.276 --rc genhtml_legend=1 00:16:30.276 --rc geninfo_all_blocks=1 00:16:30.276 --rc geninfo_unexecuted_blocks=1 00:16:30.276 00:16:30.276 ' 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:30.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.276 --rc genhtml_branch_coverage=1 00:16:30.276 --rc genhtml_function_coverage=1 00:16:30.276 --rc genhtml_legend=1 00:16:30.276 --rc geninfo_all_blocks=1 00:16:30.276 --rc geninfo_unexecuted_blocks=1 00:16:30.276 00:16:30.276 ' 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:30.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.276 --rc genhtml_branch_coverage=1 00:16:30.276 --rc genhtml_function_coverage=1 00:16:30.276 --rc genhtml_legend=1 00:16:30.276 --rc geninfo_all_blocks=1 00:16:30.276 --rc geninfo_unexecuted_blocks=1 00:16:30.276 00:16:30.276 ' 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:30.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.276 --rc genhtml_branch_coverage=1 00:16:30.276 --rc genhtml_function_coverage=1 00:16:30.276 --rc genhtml_legend=1 00:16:30.276 --rc geninfo_all_blocks=1 00:16:30.276 --rc geninfo_unexecuted_blocks=1 00:16:30.276 00:16:30.276 ' 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:30.276 11:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.276 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:30.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:30.277 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:35.612 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:35.613 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:35.613 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:35.613 Found net devices under 0000:af:00.0: cvl_0_0 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:35.613 Found net devices under 0000:af:00.1: cvl_0_1 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:35.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:16:35.613 00:16:35.613 --- 10.0.0.2 ping statistics --- 00:16:35.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.613 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:35.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:35.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:16:35.613 00:16:35.613 --- 10.0.0.1 ping statistics --- 00:16:35.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.613 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=2013396 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 2013396 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2013396 ']' 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:35.613 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:35.613 [2024-10-06 11:11:32.904359] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:16:35.614 [2024-10-06 11:11:32.904405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.614 [2024-10-06 11:11:32.966096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:35.614 [2024-10-06 11:11:33.005156] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.614 [2024-10-06 11:11:33.005199] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.614 [2024-10-06 11:11:33.005206] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.614 [2024-10-06 11:11:33.005212] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.614 [2024-10-06 11:11:33.005217] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.614 [2024-10-06 11:11:33.006708] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.614 [2024-10-06 11:11:33.006729] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.614 [2024-10-06 11:11:33.006803] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.614 [2024-10-06 11:11:33.006802] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:16:35.614 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:35.614 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:16:35.614 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:35.614 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:35.614 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:35.614 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.614 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:35.614 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24249 00:16:35.873 [2024-10-06 11:11:33.323554] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:35.873 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:35.873 { 00:16:35.873 "nqn": "nqn.2016-06.io.spdk:cnode24249", 00:16:35.873 "tgt_name": "foobar", 00:16:35.873 "method": "nvmf_create_subsystem", 00:16:35.873 "req_id": 1 00:16:35.873 } 00:16:35.873 Got JSON-RPC error response 00:16:35.873 response: 00:16:35.873 { 00:16:35.873 "code": -32603, 00:16:35.873 "message": "Unable to find target foobar" 00:16:35.873 }' 00:16:35.873 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:35.873 { 00:16:35.873 "nqn": "nqn.2016-06.io.spdk:cnode24249", 00:16:35.873 "tgt_name": "foobar", 00:16:35.873 "method": "nvmf_create_subsystem", 00:16:35.873 "req_id": 1 00:16:35.873 } 00:16:35.873 Got JSON-RPC error response 00:16:35.873 
response: 00:16:35.873 { 00:16:35.873 "code": -32603, 00:16:35.873 "message": "Unable to find target foobar" 00:16:35.873 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:35.873 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:35.873 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3501 00:16:36.132 [2024-10-06 11:11:33.532289] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3501: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:36.132 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:36.132 { 00:16:36.132 "nqn": "nqn.2016-06.io.spdk:cnode3501", 00:16:36.132 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:36.132 "method": "nvmf_create_subsystem", 00:16:36.132 "req_id": 1 00:16:36.132 } 00:16:36.132 Got JSON-RPC error response 00:16:36.132 response: 00:16:36.132 { 00:16:36.132 "code": -32602, 00:16:36.132 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:36.132 }' 00:16:36.132 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:36.132 { 00:16:36.132 "nqn": "nqn.2016-06.io.spdk:cnode3501", 00:16:36.132 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:36.132 "method": "nvmf_create_subsystem", 00:16:36.132 "req_id": 1 00:16:36.132 } 00:16:36.132 Got JSON-RPC error response 00:16:36.132 response: 00:16:36.132 { 00:16:36.132 "code": -32602, 00:16:36.132 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:36.132 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:36.132 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:36.132 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12287 00:16:36.391 [2024-10-06 11:11:33.736949] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12287: invalid model number 'SPDK_Controller' 00:16:36.391 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:36.391 { 00:16:36.391 "nqn": "nqn.2016-06.io.spdk:cnode12287", 00:16:36.391 "model_number": "SPDK_Controller\u001f", 00:16:36.391 "method": "nvmf_create_subsystem", 00:16:36.391 "req_id": 1 00:16:36.391 } 00:16:36.391 Got JSON-RPC error response 00:16:36.391 response: 00:16:36.391 { 00:16:36.391 "code": -32602, 00:16:36.391 "message": "Invalid MN SPDK_Controller\u001f" 00:16:36.391 }' 00:16:36.391 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:36.391 { 00:16:36.391 "nqn": "nqn.2016-06.io.spdk:cnode12287", 00:16:36.391 "model_number": "SPDK_Controller\u001f", 00:16:36.391 "method": "nvmf_create_subsystem", 00:16:36.391 "req_id": 1 00:16:36.391 } 00:16:36.391 Got JSON-RPC error response 00:16:36.391 response: 00:16:36.391 { 00:16:36.391 "code": -32602, 00:16:36.391 "message": "Invalid MN SPDK_Controller\u001f" 00:16:36.391 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:36.391 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:36.391 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:36.391 11:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:36.391 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:36.391 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:36.391 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:36.391 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.391 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:16:36.391 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:16:36.391 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:16:36.391 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.391 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.391 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:16:36.391 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:16:36.391 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:16:36.391 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:16:36.392 
11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 
00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ y == \- ]] 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'ylFF*u2tNp+v_tzM'\''V.D/' 00:16:36.392 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'ylFF*u2tNp+v_tzM'\''V.D/' nqn.2016-06.io.spdk:cnode8586 00:16:36.652 [2024-10-06 11:11:34.082106] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8586: invalid serial number 'ylFF*u2tNp+v_tzM'V.D/' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:36.652 { 00:16:36.652 "nqn": "nqn.2016-06.io.spdk:cnode8586", 00:16:36.652 "serial_number": "ylFF*u2tNp+v_tzM'\''V.D/", 00:16:36.652 "method": "nvmf_create_subsystem", 00:16:36.652 "req_id": 1 00:16:36.652 } 00:16:36.652 Got JSON-RPC error response 00:16:36.652 response: 00:16:36.652 { 00:16:36.652 "code": -32602, 00:16:36.652 "message": "Invalid SN ylFF*u2tNp+v_tzM'\''V.D/" 00:16:36.652 }' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:36.652 { 00:16:36.652 "nqn": "nqn.2016-06.io.spdk:cnode8586", 00:16:36.652 "serial_number": "ylFF*u2tNp+v_tzM'V.D/", 00:16:36.652 "method": "nvmf_create_subsystem", 00:16:36.652 "req_id": 1 00:16:36.652 } 00:16:36.652 Got JSON-RPC error response 00:16:36.652 response: 00:16:36.652 { 00:16:36.652 "code": -32602, 00:16:36.652 "message": "Invalid SN ylFF*u2tNp+v_tzM'V.D/" 00:16:36.652 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' 
'73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 
00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:16:36.652 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.653 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.653 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:16:36.653 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:16:36.653 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:16:36.653 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.653 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.653 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:16:36.653 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x28' 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 65 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:16:36.912 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll++ )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 3 == \- ]] 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '3uwZ| qLi.dU"A$v("fJNR"Ag(#w1k*il}-(et$Q~' 00:16:36.913 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '3uwZ| qLi.dU"A$v("fJNR"Ag(#w1k*il}-(et$Q~' nqn.2016-06.io.spdk:cnode29681 00:16:37.172 [2024-10-06 11:11:34.551686] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29681: invalid model number '3uwZ| qLi.dU"A$v("fJNR"Ag(#w1k*il}-(et$Q~' 00:16:37.172 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:37.172 { 00:16:37.172 "nqn": "nqn.2016-06.io.spdk:cnode29681", 00:16:37.172 "model_number": "3uwZ| qLi.dU\"A$v(\"fJNR\"Ag(#w1k*il}-(et$Q~", 00:16:37.172 "method": "nvmf_create_subsystem", 00:16:37.172 "req_id": 1 00:16:37.172 } 00:16:37.172 Got JSON-RPC error response 00:16:37.172 response: 00:16:37.172 { 00:16:37.172 "code": -32602, 00:16:37.172 "message": "Invalid MN 3uwZ| qLi.dU\"A$v(\"fJNR\"Ag(#w1k*il}-(et$Q~" 00:16:37.172 }' 00:16:37.172 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- 
# [[ request: 00:16:37.172 { 00:16:37.172 "nqn": "nqn.2016-06.io.spdk:cnode29681", 00:16:37.172 "model_number": "3uwZ| qLi.dU\"A$v(\"fJNR\"Ag(#w1k*il}-(et$Q~", 00:16:37.172 "method": "nvmf_create_subsystem", 00:16:37.172 "req_id": 1 00:16:37.172 } 00:16:37.172 Got JSON-RPC error response 00:16:37.172 response: 00:16:37.172 { 00:16:37.172 "code": -32602, 00:16:37.172 "message": "Invalid MN 3uwZ| qLi.dU\"A$v(\"fJNR\"Ag(#w1k*il}-(et$Q~" 00:16:37.172 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:37.172 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:37.172 [2024-10-06 11:11:34.736339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.432 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:37.432 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:37.432 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:37.432 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:37.432 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:37.432 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:37.691 [2024-10-06 11:11:35.141666] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:37.691 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:37.691 { 00:16:37.691 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:37.691 "listen_address": { 00:16:37.691 "trtype": "tcp", 00:16:37.691 "traddr": "", 00:16:37.691 "trsvcid": "4421" 00:16:37.691 }, 00:16:37.691 "method": "nvmf_subsystem_remove_listener", 00:16:37.691 "req_id": 1 00:16:37.691 } 00:16:37.691 Got JSON-RPC error response 00:16:37.691 response: 00:16:37.691 { 00:16:37.691 "code": -32602, 00:16:37.691 "message": "Invalid parameters" 00:16:37.691 }' 00:16:37.691 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:37.691 { 00:16:37.691 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:37.691 "listen_address": { 00:16:37.691 "trtype": "tcp", 00:16:37.691 "traddr": "", 00:16:37.691 "trsvcid": "4421" 00:16:37.691 }, 00:16:37.691 "method": "nvmf_subsystem_remove_listener", 00:16:37.691 "req_id": 1 00:16:37.691 } 00:16:37.691 Got JSON-RPC error response 00:16:37.691 response: 00:16:37.691 { 00:16:37.691 "code": -32602, 00:16:37.691 "message": "Invalid parameters" 00:16:37.691 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:37.691 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13979 -i 0 00:16:37.951 [2024-10-06 11:11:35.342279] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13979: invalid cntlid range [0-65519] 00:16:37.951 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:37.951 { 00:16:37.951 "nqn": "nqn.2016-06.io.spdk:cnode13979", 00:16:37.951 "min_cntlid": 0, 
00:16:37.951 "method": "nvmf_create_subsystem", 00:16:37.951 "req_id": 1 00:16:37.951 } 00:16:37.951 Got JSON-RPC error response 00:16:37.951 response: 00:16:37.951 { 00:16:37.951 "code": -32602, 00:16:37.951 "message": "Invalid cntlid range [0-65519]" 00:16:37.951 }' 00:16:37.951 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:37.951 { 00:16:37.951 "nqn": "nqn.2016-06.io.spdk:cnode13979", 00:16:37.951 "min_cntlid": 0, 00:16:37.951 "method": "nvmf_create_subsystem", 00:16:37.951 "req_id": 1 00:16:37.951 } 00:16:37.951 Got JSON-RPC error response 00:16:37.951 response: 00:16:37.951 { 00:16:37.951 "code": -32602, 00:16:37.951 "message": "Invalid cntlid range [0-65519]" 00:16:37.951 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:37.951 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26 -i 65520 00:16:38.210 [2024-10-06 11:11:35.550958] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26: invalid cntlid range [65520-65519] 00:16:38.210 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:38.210 { 00:16:38.210 "nqn": "nqn.2016-06.io.spdk:cnode26", 00:16:38.210 "min_cntlid": 65520, 00:16:38.210 "method": "nvmf_create_subsystem", 00:16:38.210 "req_id": 1 00:16:38.210 } 00:16:38.210 Got JSON-RPC error response 00:16:38.210 response: 00:16:38.210 { 00:16:38.210 "code": -32602, 00:16:38.210 "message": "Invalid cntlid range [65520-65519]" 00:16:38.210 }' 00:16:38.210 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:38.210 { 00:16:38.210 "nqn": "nqn.2016-06.io.spdk:cnode26", 00:16:38.210 "min_cntlid": 65520, 00:16:38.210 "method": "nvmf_create_subsystem", 00:16:38.210 "req_id": 1 00:16:38.210 } 00:16:38.210 Got JSON-RPC error response 00:16:38.210 response: 00:16:38.210 { 00:16:38.210 "code": -32602, 00:16:38.210 "message": "Invalid cntlid range [65520-65519]" 00:16:38.210 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:38.210 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31076 -I 0 00:16:38.210 [2024-10-06 11:11:35.759656] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31076: invalid cntlid range [1-0] 00:16:38.474 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:38.474 { 00:16:38.474 "nqn": "nqn.2016-06.io.spdk:cnode31076", 00:16:38.474 "max_cntlid": 0, 00:16:38.474 "method": "nvmf_create_subsystem", 00:16:38.474 "req_id": 1 00:16:38.474 } 00:16:38.474 Got JSON-RPC error response 00:16:38.474 response: 00:16:38.474 { 00:16:38.474 "code": -32602, 00:16:38.474 "message": "Invalid cntlid range [1-0]" 00:16:38.474 }' 00:16:38.474 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:38.474 { 00:16:38.474 "nqn": "nqn.2016-06.io.spdk:cnode31076", 00:16:38.474 "max_cntlid": 0, 00:16:38.474 "method": "nvmf_create_subsystem", 00:16:38.474 "req_id": 1 00:16:38.474 } 00:16:38.474 Got JSON-RPC error response 00:16:38.474 response: 00:16:38.474 { 00:16:38.474 "code": -32602, 00:16:38.474 "message": "Invalid cntlid range [1-0]" 00:16:38.474 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:38.474 
11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3665 -I 65520 00:16:38.474 [2024-10-06 11:11:35.960321] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3665: invalid cntlid range [1-65520] 00:16:38.474 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:38.474 { 00:16:38.474 "nqn": "nqn.2016-06.io.spdk:cnode3665", 00:16:38.474 "max_cntlid": 65520, 00:16:38.474 "method": "nvmf_create_subsystem", 00:16:38.474 "req_id": 1 00:16:38.474 } 00:16:38.474 Got JSON-RPC error response 00:16:38.474 response: 00:16:38.474 { 00:16:38.474 "code": -32602, 00:16:38.474 "message": "Invalid cntlid range [1-65520]" 00:16:38.474 }' 00:16:38.474 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:38.474 { 00:16:38.474 "nqn": "nqn.2016-06.io.spdk:cnode3665", 00:16:38.474 "max_cntlid": 65520, 00:16:38.474 "method": "nvmf_create_subsystem", 00:16:38.474 "req_id": 1 00:16:38.474 } 00:16:38.474 Got JSON-RPC error response 00:16:38.474 response: 00:16:38.474 { 00:16:38.474 "code": -32602, 00:16:38.474 "message": "Invalid cntlid range [1-65520]" 00:16:38.474 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:38.474 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17534 -i 6 -I 5 00:16:38.733 [2024-10-06 11:11:36.161022] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17534: invalid cntlid range [6-5] 00:16:38.733 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:38.733 { 00:16:38.733 "nqn": "nqn.2016-06.io.spdk:cnode17534", 00:16:38.733 "min_cntlid": 6, 00:16:38.733 "max_cntlid": 5, 00:16:38.733 "method": "nvmf_create_subsystem", 00:16:38.733 "req_id": 1 00:16:38.733 } 00:16:38.733 Got JSON-RPC error response 00:16:38.733 response: 00:16:38.733 { 00:16:38.733 "code": -32602, 00:16:38.733 "message": "Invalid cntlid range [6-5]" 00:16:38.733 }' 00:16:38.733 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:38.733 { 00:16:38.733 "nqn": "nqn.2016-06.io.spdk:cnode17534", 00:16:38.733 "min_cntlid": 6, 00:16:38.733 "max_cntlid": 5, 00:16:38.733 "method": "nvmf_create_subsystem", 00:16:38.733 "req_id": 1 00:16:38.733 } 00:16:38.733 Got JSON-RPC error response 00:16:38.733 response: 00:16:38.733 { 00:16:38.733 "code": -32602, 00:16:38.733 "message": "Invalid cntlid range [6-5]" 00:16:38.733 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:38.733 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:38.733 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:38.733 { 00:16:38.733 "name": "foobar", 00:16:38.733 "method": "nvmf_delete_target", 00:16:38.733 "req_id": 1 00:16:38.733 } 00:16:38.733 Got JSON-RPC error response 00:16:38.733 response: 00:16:38.733 { 00:16:38.733 "code": -32602, 00:16:38.733 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:16:38.733 }' 00:16:38.733 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:38.733 { 00:16:38.733 "name": "foobar", 00:16:38.733 "method": "nvmf_delete_target", 00:16:38.733 "req_id": 1 00:16:38.733 } 00:16:38.733 Got JSON-RPC error response 00:16:38.733 response: 00:16:38.733 { 00:16:38.733 "code": -32602, 00:16:38.733 "message": "The specified target doesn't exist, cannot delete it." 00:16:38.733 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:38.733 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:38.733 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:38.733 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:38.733 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:16:38.733 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:38.733 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:16:38.733 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:38.733 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:38.992 rmmod nvme_tcp 00:16:38.992 rmmod nvme_fabrics 00:16:38.992 rmmod nvme_keyring 00:16:38.992 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:38.992 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:16:38.992 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:16:38.992 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 2013396 ']' 00:16:38.992 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 2013396 00:16:38.992 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2013396 ']' 00:16:38.992 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2013396 00:16:38.992 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:16:38.992 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:38.992 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2013396 00:16:38.992 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:38.992 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:38.992 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2013396' 00:16:38.992 killing process with pid 2013396 00:16:38.992 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2013396 00:16:38.992 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2013396 00:16:39.251 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:39.251 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:39.251 11:11:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:39.251 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:16:39.251 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:16:39.251 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore 00:16:39.251 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:39.251 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:39.251 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:39.251 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.251 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:39.251 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.156 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:41.156 00:16:41.156 real 0m11.302s 00:16:41.156 user 0m18.133s 00:16:41.156 sys 0m4.875s 00:16:41.156 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:41.156 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:41.156 ************************************ 00:16:41.156 END TEST nvmf_invalid 00:16:41.156 ************************************ 00:16:41.156 11:11:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:41.156 11:11:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:41.156 11:11:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:41.156 11:11:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:41.415 ************************************ 00:16:41.415 START TEST nvmf_connect_stress 00:16:41.415 ************************************ 00:16:41.415 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:41.415 * Looking for test storage... 
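Before the connect_stress run above starts, the nvmf_invalid teardown (nvmftestfini) traced a few lines earlier reduces to a short cleanup sequence. A simplified, hypothetical condensation follows; the interface name cvl_0_1 and the target PID reflect this particular run and would differ elsewhere, and the $nvmfpid variable is an illustrative stand-in.

    # Hypothetical condensation of the nvmftestfini steps visible in the trace.
    sync                                                    # flush outstanding writes before tearing down
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring       # unload the kernel NVMe initiator modules (rmmod lines above)
    kill "$nvmfpid" && wait "$nvmfpid"                      # stop the SPDK nvmf target (pid 2013396 in this run)
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop the SPDK_NVMF rules added for the test
    ip -4 addr flush cvl_0_1                                # clear the test address from the second test port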
00:16:41.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:41.415 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:41.415 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:16:41.415 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:41.415 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:41.415 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:41.415 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:41.415 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:41.415 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.415 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:41.415 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:41.415 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:41.415 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:41.415 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:41.415 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:41.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.416 --rc genhtml_branch_coverage=1 00:16:41.416 --rc genhtml_function_coverage=1 00:16:41.416 --rc genhtml_legend=1 00:16:41.416 --rc geninfo_all_blocks=1 00:16:41.416 --rc geninfo_unexecuted_blocks=1 00:16:41.416 00:16:41.416 ' 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:41.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.416 --rc genhtml_branch_coverage=1 00:16:41.416 --rc genhtml_function_coverage=1 00:16:41.416 --rc genhtml_legend=1 00:16:41.416 --rc geninfo_all_blocks=1 00:16:41.416 --rc geninfo_unexecuted_blocks=1 00:16:41.416 00:16:41.416 ' 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:41.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.416 --rc genhtml_branch_coverage=1 00:16:41.416 --rc genhtml_function_coverage=1 00:16:41.416 --rc genhtml_legend=1 00:16:41.416 --rc geninfo_all_blocks=1 00:16:41.416 --rc geninfo_unexecuted_blocks=1 00:16:41.416 00:16:41.416 ' 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:41.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.416 --rc genhtml_branch_coverage=1 00:16:41.416 --rc genhtml_function_coverage=1 00:16:41.416 --rc genhtml_legend=1 00:16:41.416 --rc geninfo_all_blocks=1 00:16:41.416 --rc geninfo_unexecuted_blocks=1 00:16:41.416 00:16:41.416 ' 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:41.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:41.416 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.988 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:47.988 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:47.988 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:47.988 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:47.988 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:47.988 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:47.988 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:47.988 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:47.988 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:47.988 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:47.988 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:47.988 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:47.988 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:47.988 11:11:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:47.988 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:47.989 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:47.989 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:47.989 Found net devices under 0000:af:00.0: cvl_0_0 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:47.989 Found net devices under 0000:af:00.1: cvl_0_1 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:47.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:47.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:16:47.989 00:16:47.989 --- 10.0.0.2 ping statistics --- 00:16:47.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.989 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:47.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:47.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:16:47.989 00:16:47.989 --- 10.0.0.1 ping statistics --- 00:16:47.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.989 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:47.989 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=2017574 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 2017574 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2017574 ']' 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:47.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.990 [2024-10-06 11:11:44.742869] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:16:47.990 [2024-10-06 11:11:44.742911] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.990 [2024-10-06 11:11:44.802244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:47.990 [2024-10-06 11:11:44.841036] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.990 [2024-10-06 11:11:44.841077] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.990 [2024-10-06 11:11:44.841085] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.990 [2024-10-06 11:11:44.841091] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.990 [2024-10-06 11:11:44.841096] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:47.990 [2024-10-06 11:11:44.842053] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.990 [2024-10-06 11:11:44.842140] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:16:47.990 [2024-10-06 11:11:44.842142] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.990 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.990 [2024-10-06 11:11:44.984166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
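The trace above is the harness bringing up the NVMe/TCP test topology: one port of the e810 NIC (cvl_0_0, 10.0.0.2) is moved into a private network namespace that hosts the SPDK target, the other port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator side, an iptables rule admits TCP port 4420, connectivity is verified with ping in both directions, and nvmf_tgt is then launched inside the namespace and configured over RPC (the transport and cnode1 subsystem here, with the listener and NULL1 bdev continuing below). A minimal standalone sketch of those steps, using only commands visible in the trace -- the harness actually drives this through nvmftestinit/nvmfappstart in test/nvmf/common.sh, so the relative paths and the plain rpc.py call are illustrative rather than its exact code:

    # Target port goes into its own namespace; initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns

    # Launch the target inside the namespace, then configure it over the RPC socket
    # (the harness waits for /var/tmp/spdk.sock via waitforlisten before the first RPC).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192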
00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.990 [2024-10-06 11:11:45.014490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.990 NULL1 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2017645 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.990 11:11:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.990 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.250 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.250 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:48.250 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.250 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.250 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.818 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.818 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:48.818 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.818 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.818 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.076 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.076 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:49.076 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.076 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.076 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.334 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.334 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:49.334 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.334 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.334 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.594 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.594 11:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:49.594 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.594 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.594 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.853 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.853 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:49.853 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.853 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.853 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.421 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.421 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:50.421 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.421 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.422 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.680 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.680 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:50.680 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.680 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.680 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.938 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.938 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:50.938 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.938 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.938 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.196 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.196 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:51.196 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.196 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.196 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.455 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.455 11:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:51.455 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.455 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.455 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.025 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.025 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:52.025 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.025 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.025 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.284 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.284 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:52.284 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.284 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.284 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.543 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.544 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:52.544 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.544 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.544 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.803 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.803 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:52.803 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.803 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.803 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.371 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.371 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:53.371 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.371 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.371 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.631 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.631 11:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:53.631 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.631 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.631 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.891 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.891 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:53.891 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.891 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.891 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.150 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.150 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:54.150 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.150 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.150 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.409 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.409 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:54.409 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.409 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.409 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.976 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.976 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:54.976 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.976 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.976 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.235 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.235 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:55.235 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.235 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.235 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.494 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.494 11:11:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:55.494 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.494 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.494 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.753 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.753 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:55.753 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.753 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.753 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.012 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.012 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:56.012 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.012 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.012 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.580 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.580 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:56.580 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.580 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.580 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.838 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.838 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:56.838 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.838 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.838 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.097 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.097 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:57.097 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.097 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.097 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.356 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.356 11:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:57.356 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.356 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.356 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.615 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2017645 00:16:57.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2017645) - No such process 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2017645 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:57.875 rmmod nvme_tcp 00:16:57.875 rmmod nvme_fabrics 00:16:57.875 rmmod nvme_keyring 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 2017574 ']' 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 2017574 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2017574 ']' 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2017574 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2017574 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 
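The repetitive run of kill -0 / rpc_cmd entries above is the core of the stress pattern: connect_stress is started in the background against the exported subsystem, the seq 1 20 / cat loop queues a batch of RPC commands into rpc.txt, and the harness keeps replaying that batch against the target for as long as the stress client stays alive, probing it with kill -0 between rounds; once kill -0 reports "No such process" the script waits for the exit status, removes rpc.txt and tears the target down. A rough sketch of that supervision loop, assuming the same binary and paths as the trace -- the actual RPC lines written into rpc.txt are not visible in this log, so nvmf_get_subsystems below is only a stand-in, and the harness issues the batch through its rpc_cmd wrapper rather than one rpc.py invocation per line:

    # Background stress client hammering the target (runs for ~10 seconds per the -t 10 argument).
    ./test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!

    rpcs=./test/nvmf/target/rpc.txt
    rm -f "$rpcs"
    for i in $(seq 1 20); do
        echo "nvmf_get_subsystems" >> "$rpcs"    # stand-in: the real entries are not shown in the trace
    done

    # Keep the control plane busy while the data-plane stress client is alive.
    while kill -0 "$PERF_PID" 2> /dev/null; do
        while read -r rpc; do
            ./scripts/rpc.py $rpc                # one command per line of the batch file
        done < "$rpcs"
    done
    wait "$PERF_PID"                             # propagate connect_stress's exit status as the verdict
    rm -f "$rpcs"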
00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2017574' 00:16:57.875 killing process with pid 2017574 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2017574 00:16:57.875 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2017574 00:16:58.135 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:58.135 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:58.135 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:58.135 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:16:58.135 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:16:58.135 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:58.135 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:16:58.135 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:58.135 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:58.135 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.135 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.135 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.040 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:00.040 00:17:00.040 real 0m18.853s 00:17:00.040 user 0m39.252s 00:17:00.040 sys 0m8.520s 00:17:00.040 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:00.040 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.040 ************************************ 00:17:00.040 END TEST nvmf_connect_stress 00:17:00.040 ************************************ 00:17:00.299 11:11:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:00.299 11:11:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:00.299 11:11:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:00.299 11:11:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:00.299 ************************************ 00:17:00.299 START TEST nvmf_fused_ordering 00:17:00.299 ************************************ 00:17:00.299 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:00.299 * Looking for test storage... 
00:17:00.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.299 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:00.299 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:17:00.299 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:00.299 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:00.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.300 --rc genhtml_branch_coverage=1 00:17:00.300 --rc genhtml_function_coverage=1 00:17:00.300 --rc genhtml_legend=1 00:17:00.300 --rc geninfo_all_blocks=1 00:17:00.300 --rc geninfo_unexecuted_blocks=1 00:17:00.300 00:17:00.300 ' 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:00.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.300 --rc genhtml_branch_coverage=1 00:17:00.300 --rc genhtml_function_coverage=1 00:17:00.300 --rc genhtml_legend=1 00:17:00.300 --rc geninfo_all_blocks=1 00:17:00.300 --rc geninfo_unexecuted_blocks=1 00:17:00.300 00:17:00.300 ' 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:00.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.300 --rc genhtml_branch_coverage=1 00:17:00.300 --rc genhtml_function_coverage=1 00:17:00.300 --rc genhtml_legend=1 00:17:00.300 --rc geninfo_all_blocks=1 00:17:00.300 --rc geninfo_unexecuted_blocks=1 00:17:00.300 00:17:00.300 ' 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:00.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.300 --rc genhtml_branch_coverage=1 00:17:00.300 --rc genhtml_function_coverage=1 00:17:00.300 --rc genhtml_legend=1 00:17:00.300 --rc geninfo_all_blocks=1 00:17:00.300 --rc geninfo_unexecuted_blocks=1 00:17:00.300 00:17:00.300 ' 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.300 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.559 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.559 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.559 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.559 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:00.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:00.560 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:05.835 11:12:02 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:05.835 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:05.835 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:05.835 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:05.836 Found net devices under 0000:af:00.0: cvl_0_0 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:05.836 Found net devices under 0000:af:00.1: cvl_0_1 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:05.836 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:05.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:05.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:17:05.836 00:17:05.836 --- 10.0.0.2 ping statistics --- 00:17:05.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.836 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:05.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:05.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:17:05.836 00:17:05.836 --- 10.0.0.1 ping statistics --- 00:17:05.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.836 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=2022878 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 2022878 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2022878 ']' 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:05.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.836 [2024-10-06 11:12:03.142617] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:17:05.836 [2024-10-06 11:12:03.142662] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.836 [2024-10-06 11:12:03.196188] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.836 [2024-10-06 11:12:03.238613] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.836 [2024-10-06 11:12:03.238650] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.836 [2024-10-06 11:12:03.238657] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.836 [2024-10-06 11:12:03.238663] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.836 [2024-10-06 11:12:03.238669] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.836 [2024-10-06 11:12:03.239179] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.836 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.836 [2024-10-06 11:12:03.367744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.837 [2024-10-06 11:12:03.383933] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.837 NULL1 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.837 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:06.096 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.096 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:06.096 [2024-10-06 11:12:03.436015] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:17:06.096 [2024-10-06 11:12:03.436047] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2022997 ] 00:17:06.355 Attached to nqn.2016-06.io.spdk:cnode1 00:17:06.355 Namespace ID: 1 size: 1GB 00:17:06.355 fused_ordering(0) 00:17:06.355 fused_ordering(1) 00:17:06.355 fused_ordering(2) 00:17:06.355 fused_ordering(3) 00:17:06.355 fused_ordering(4) 00:17:06.355 fused_ordering(5) 00:17:06.355 fused_ordering(6) 00:17:06.355 fused_ordering(7) 00:17:06.355 fused_ordering(8) 00:17:06.355 fused_ordering(9) 00:17:06.355 fused_ordering(10) 00:17:06.355 fused_ordering(11) 00:17:06.355 fused_ordering(12) 00:17:06.355 fused_ordering(13) 00:17:06.355 fused_ordering(14) 00:17:06.355 fused_ordering(15) 00:17:06.355 fused_ordering(16) 00:17:06.355 fused_ordering(17) 00:17:06.355 fused_ordering(18) 00:17:06.355 fused_ordering(19) 00:17:06.355 fused_ordering(20) 00:17:06.355 fused_ordering(21) 00:17:06.355 fused_ordering(22) 00:17:06.355 fused_ordering(23) 00:17:06.355 fused_ordering(24) 00:17:06.355 fused_ordering(25) 00:17:06.355 fused_ordering(26) 00:17:06.355 fused_ordering(27) 00:17:06.355 fused_ordering(28) 00:17:06.355 fused_ordering(29) 00:17:06.355 fused_ordering(30) 00:17:06.355 fused_ordering(31) 00:17:06.355 fused_ordering(32) 00:17:06.355 fused_ordering(33) 00:17:06.355 fused_ordering(34) 00:17:06.355 fused_ordering(35) 00:17:06.355 fused_ordering(36) 00:17:06.355 fused_ordering(37) 00:17:06.355 fused_ordering(38) 00:17:06.355 fused_ordering(39) 00:17:06.355 fused_ordering(40) 00:17:06.355 fused_ordering(41) 00:17:06.355 fused_ordering(42) 00:17:06.355 fused_ordering(43) 00:17:06.355 fused_ordering(44) 00:17:06.355 fused_ordering(45) 00:17:06.355 fused_ordering(46) 00:17:06.355 fused_ordering(47) 00:17:06.355 fused_ordering(48) 00:17:06.355 fused_ordering(49) 00:17:06.355 fused_ordering(50) 00:17:06.355 fused_ordering(51) 00:17:06.355 fused_ordering(52) 00:17:06.355 fused_ordering(53) 00:17:06.355 fused_ordering(54) 00:17:06.355 fused_ordering(55) 00:17:06.355 fused_ordering(56) 00:17:06.355 fused_ordering(57) 00:17:06.355 fused_ordering(58) 00:17:06.355 fused_ordering(59) 00:17:06.355 fused_ordering(60) 00:17:06.355 fused_ordering(61) 00:17:06.355 fused_ordering(62) 00:17:06.355 fused_ordering(63) 00:17:06.355 fused_ordering(64) 00:17:06.355 fused_ordering(65) 00:17:06.355 fused_ordering(66) 00:17:06.355 fused_ordering(67) 00:17:06.355 fused_ordering(68) 00:17:06.355 fused_ordering(69) 00:17:06.355 fused_ordering(70) 00:17:06.355 fused_ordering(71) 00:17:06.355 fused_ordering(72) 00:17:06.355 fused_ordering(73) 00:17:06.355 fused_ordering(74) 00:17:06.355 fused_ordering(75) 00:17:06.355 fused_ordering(76) 00:17:06.355 fused_ordering(77) 00:17:06.355 fused_ordering(78) 00:17:06.355 fused_ordering(79) 00:17:06.355 fused_ordering(80) 00:17:06.355 fused_ordering(81) 00:17:06.355 fused_ordering(82) 00:17:06.355 fused_ordering(83) 00:17:06.355 fused_ordering(84) 00:17:06.355 fused_ordering(85) 00:17:06.355 fused_ordering(86) 00:17:06.355 fused_ordering(87) 00:17:06.355 fused_ordering(88) 00:17:06.355 fused_ordering(89) 00:17:06.355 fused_ordering(90) 00:17:06.355 fused_ordering(91) 00:17:06.355 fused_ordering(92) 00:17:06.355 fused_ordering(93) 00:17:06.355 fused_ordering(94) 00:17:06.355 fused_ordering(95) 00:17:06.355 fused_ordering(96) 00:17:06.355 fused_ordering(97) 00:17:06.355 fused_ordering(98) 
00:17:06.355 fused_ordering(99) 00:17:06.355 fused_ordering(100) 00:17:06.355 fused_ordering(101) 00:17:06.355 fused_ordering(102) 00:17:06.355 fused_ordering(103) 00:17:06.355 fused_ordering(104) 00:17:06.355 fused_ordering(105) 00:17:06.355 fused_ordering(106) 00:17:06.355 fused_ordering(107) 00:17:06.355 fused_ordering(108) 00:17:06.355 fused_ordering(109) 00:17:06.355 fused_ordering(110) 00:17:06.355 fused_ordering(111) 00:17:06.355 fused_ordering(112) 00:17:06.355 fused_ordering(113) 00:17:06.355 fused_ordering(114) 00:17:06.355 fused_ordering(115) 00:17:06.355 fused_ordering(116) 00:17:06.355 fused_ordering(117) 00:17:06.355 fused_ordering(118) 00:17:06.356 fused_ordering(119) 00:17:06.356 fused_ordering(120) 00:17:06.356 fused_ordering(121) 00:17:06.356 fused_ordering(122) 00:17:06.356 fused_ordering(123) 00:17:06.356 fused_ordering(124) 00:17:06.356 fused_ordering(125) 00:17:06.356 fused_ordering(126) 00:17:06.356 fused_ordering(127) 00:17:06.356 fused_ordering(128) 00:17:06.356 fused_ordering(129) 00:17:06.356 fused_ordering(130) 00:17:06.356 fused_ordering(131) 00:17:06.356 fused_ordering(132) 00:17:06.356 fused_ordering(133) 00:17:06.356 fused_ordering(134) 00:17:06.356 fused_ordering(135) 00:17:06.356 fused_ordering(136) 00:17:06.356 fused_ordering(137) 00:17:06.356 fused_ordering(138) 00:17:06.356 fused_ordering(139) 00:17:06.356 fused_ordering(140) 00:17:06.356 fused_ordering(141) 00:17:06.356 fused_ordering(142) 00:17:06.356 fused_ordering(143) 00:17:06.356 fused_ordering(144) 00:17:06.356 fused_ordering(145) 00:17:06.356 fused_ordering(146) 00:17:06.356 fused_ordering(147) 00:17:06.356 fused_ordering(148) 00:17:06.356 fused_ordering(149) 00:17:06.356 fused_ordering(150) 00:17:06.356 fused_ordering(151) 00:17:06.356 fused_ordering(152) 00:17:06.356 fused_ordering(153) 00:17:06.356 fused_ordering(154) 00:17:06.356 fused_ordering(155) 00:17:06.356 fused_ordering(156) 00:17:06.356 fused_ordering(157) 00:17:06.356 fused_ordering(158) 00:17:06.356 fused_ordering(159) 00:17:06.356 fused_ordering(160) 00:17:06.356 fused_ordering(161) 00:17:06.356 fused_ordering(162) 00:17:06.356 fused_ordering(163) 00:17:06.356 fused_ordering(164) 00:17:06.356 fused_ordering(165) 00:17:06.356 fused_ordering(166) 00:17:06.356 fused_ordering(167) 00:17:06.356 fused_ordering(168) 00:17:06.356 fused_ordering(169) 00:17:06.356 fused_ordering(170) 00:17:06.356 fused_ordering(171) 00:17:06.356 fused_ordering(172) 00:17:06.356 fused_ordering(173) 00:17:06.356 fused_ordering(174) 00:17:06.356 fused_ordering(175) 00:17:06.356 fused_ordering(176) 00:17:06.356 fused_ordering(177) 00:17:06.356 fused_ordering(178) 00:17:06.356 fused_ordering(179) 00:17:06.356 fused_ordering(180) 00:17:06.356 fused_ordering(181) 00:17:06.356 fused_ordering(182) 00:17:06.356 fused_ordering(183) 00:17:06.356 fused_ordering(184) 00:17:06.356 fused_ordering(185) 00:17:06.356 fused_ordering(186) 00:17:06.356 fused_ordering(187) 00:17:06.356 fused_ordering(188) 00:17:06.356 fused_ordering(189) 00:17:06.356 fused_ordering(190) 00:17:06.356 fused_ordering(191) 00:17:06.356 fused_ordering(192) 00:17:06.356 fused_ordering(193) 00:17:06.356 fused_ordering(194) 00:17:06.356 fused_ordering(195) 00:17:06.356 fused_ordering(196) 00:17:06.356 fused_ordering(197) 00:17:06.356 fused_ordering(198) 00:17:06.356 fused_ordering(199) 00:17:06.356 fused_ordering(200) 00:17:06.356 fused_ordering(201) 00:17:06.356 fused_ordering(202) 00:17:06.356 fused_ordering(203) 00:17:06.356 fused_ordering(204) 00:17:06.356 fused_ordering(205) 00:17:06.615 
fused_ordering(206) 00:17:06.615 fused_ordering(207) 00:17:06.615 fused_ordering(208) 00:17:06.615 fused_ordering(209) 00:17:06.615 fused_ordering(210) 00:17:06.615 fused_ordering(211) 00:17:06.615 fused_ordering(212) 00:17:06.615 fused_ordering(213) 00:17:06.615 fused_ordering(214) 00:17:06.615 fused_ordering(215) 00:17:06.615 fused_ordering(216) 00:17:06.615 fused_ordering(217) 00:17:06.615 fused_ordering(218) 00:17:06.615 fused_ordering(219) 00:17:06.615 fused_ordering(220) 00:17:06.615 fused_ordering(221) 00:17:06.615 fused_ordering(222) 00:17:06.615 fused_ordering(223) 00:17:06.615 fused_ordering(224) 00:17:06.615 fused_ordering(225) 00:17:06.615 fused_ordering(226) 00:17:06.615 fused_ordering(227) 00:17:06.615 fused_ordering(228) 00:17:06.615 fused_ordering(229) 00:17:06.615 fused_ordering(230) 00:17:06.615 fused_ordering(231) 00:17:06.615 fused_ordering(232) 00:17:06.615 fused_ordering(233) 00:17:06.615 fused_ordering(234) 00:17:06.615 fused_ordering(235) 00:17:06.615 fused_ordering(236) 00:17:06.615 fused_ordering(237) 00:17:06.615 fused_ordering(238) 00:17:06.615 fused_ordering(239) 00:17:06.615 fused_ordering(240) 00:17:06.615 fused_ordering(241) 00:17:06.615 fused_ordering(242) 00:17:06.615 fused_ordering(243) 00:17:06.615 fused_ordering(244) 00:17:06.615 fused_ordering(245) 00:17:06.615 fused_ordering(246) 00:17:06.615 fused_ordering(247) 00:17:06.615 fused_ordering(248) 00:17:06.615 fused_ordering(249) 00:17:06.615 fused_ordering(250) 00:17:06.615 fused_ordering(251) 00:17:06.615 fused_ordering(252) 00:17:06.615 fused_ordering(253) 00:17:06.615 fused_ordering(254) 00:17:06.615 fused_ordering(255) 00:17:06.615 fused_ordering(256) 00:17:06.615 fused_ordering(257) 00:17:06.615 fused_ordering(258) 00:17:06.615 fused_ordering(259) 00:17:06.615 fused_ordering(260) 00:17:06.615 fused_ordering(261) 00:17:06.615 fused_ordering(262) 00:17:06.615 fused_ordering(263) 00:17:06.615 fused_ordering(264) 00:17:06.615 fused_ordering(265) 00:17:06.615 fused_ordering(266) 00:17:06.615 fused_ordering(267) 00:17:06.615 fused_ordering(268) 00:17:06.615 fused_ordering(269) 00:17:06.615 fused_ordering(270) 00:17:06.615 fused_ordering(271) 00:17:06.615 fused_ordering(272) 00:17:06.615 fused_ordering(273) 00:17:06.615 fused_ordering(274) 00:17:06.615 fused_ordering(275) 00:17:06.615 fused_ordering(276) 00:17:06.615 fused_ordering(277) 00:17:06.615 fused_ordering(278) 00:17:06.615 fused_ordering(279) 00:17:06.615 fused_ordering(280) 00:17:06.615 fused_ordering(281) 00:17:06.615 fused_ordering(282) 00:17:06.615 fused_ordering(283) 00:17:06.615 fused_ordering(284) 00:17:06.615 fused_ordering(285) 00:17:06.615 fused_ordering(286) 00:17:06.615 fused_ordering(287) 00:17:06.615 fused_ordering(288) 00:17:06.615 fused_ordering(289) 00:17:06.615 fused_ordering(290) 00:17:06.615 fused_ordering(291) 00:17:06.615 fused_ordering(292) 00:17:06.615 fused_ordering(293) 00:17:06.615 fused_ordering(294) 00:17:06.615 fused_ordering(295) 00:17:06.615 fused_ordering(296) 00:17:06.615 fused_ordering(297) 00:17:06.615 fused_ordering(298) 00:17:06.615 fused_ordering(299) 00:17:06.615 fused_ordering(300) 00:17:06.615 fused_ordering(301) 00:17:06.615 fused_ordering(302) 00:17:06.615 fused_ordering(303) 00:17:06.615 fused_ordering(304) 00:17:06.615 fused_ordering(305) 00:17:06.615 fused_ordering(306) 00:17:06.615 fused_ordering(307) 00:17:06.615 fused_ordering(308) 00:17:06.615 fused_ordering(309) 00:17:06.615 fused_ordering(310) 00:17:06.615 fused_ordering(311) 00:17:06.615 fused_ordering(312) 00:17:06.615 fused_ordering(313) 
00:17:06.615 fused_ordering(314) 00:17:06.615 fused_ordering(315) 00:17:06.615 fused_ordering(316) 00:17:06.615 fused_ordering(317) 00:17:06.615 fused_ordering(318) 00:17:06.615 fused_ordering(319) 00:17:06.615 fused_ordering(320) 00:17:06.615 fused_ordering(321) 00:17:06.615 fused_ordering(322) 00:17:06.615 fused_ordering(323) 00:17:06.615 fused_ordering(324) 00:17:06.615 fused_ordering(325) 00:17:06.615 fused_ordering(326) 00:17:06.615 fused_ordering(327) 00:17:06.615 fused_ordering(328) 00:17:06.615 fused_ordering(329) 00:17:06.615 fused_ordering(330) 00:17:06.615 fused_ordering(331) 00:17:06.615 fused_ordering(332) 00:17:06.615 fused_ordering(333) 00:17:06.615 fused_ordering(334) 00:17:06.615 fused_ordering(335) 00:17:06.615 fused_ordering(336) 00:17:06.615 fused_ordering(337) 00:17:06.615 fused_ordering(338) 00:17:06.616 fused_ordering(339) 00:17:06.616 fused_ordering(340) 00:17:06.616 fused_ordering(341) 00:17:06.616 fused_ordering(342) 00:17:06.616 fused_ordering(343) 00:17:06.616 fused_ordering(344) 00:17:06.616 fused_ordering(345) 00:17:06.616 fused_ordering(346) 00:17:06.616 fused_ordering(347) 00:17:06.616 fused_ordering(348) 00:17:06.616 fused_ordering(349) 00:17:06.616 fused_ordering(350) 00:17:06.616 fused_ordering(351) 00:17:06.616 fused_ordering(352) 00:17:06.616 fused_ordering(353) 00:17:06.616 fused_ordering(354) 00:17:06.616 fused_ordering(355) 00:17:06.616 fused_ordering(356) 00:17:06.616 fused_ordering(357) 00:17:06.616 fused_ordering(358) 00:17:06.616 fused_ordering(359) 00:17:06.616 fused_ordering(360) 00:17:06.616 fused_ordering(361) 00:17:06.616 fused_ordering(362) 00:17:06.616 fused_ordering(363) 00:17:06.616 fused_ordering(364) 00:17:06.616 fused_ordering(365) 00:17:06.616 fused_ordering(366) 00:17:06.616 fused_ordering(367) 00:17:06.616 fused_ordering(368) 00:17:06.616 fused_ordering(369) 00:17:06.616 fused_ordering(370) 00:17:06.616 fused_ordering(371) 00:17:06.616 fused_ordering(372) 00:17:06.616 fused_ordering(373) 00:17:06.616 fused_ordering(374) 00:17:06.616 fused_ordering(375) 00:17:06.616 fused_ordering(376) 00:17:06.616 fused_ordering(377) 00:17:06.616 fused_ordering(378) 00:17:06.616 fused_ordering(379) 00:17:06.616 fused_ordering(380) 00:17:06.616 fused_ordering(381) 00:17:06.616 fused_ordering(382) 00:17:06.616 fused_ordering(383) 00:17:06.616 fused_ordering(384) 00:17:06.616 fused_ordering(385) 00:17:06.616 fused_ordering(386) 00:17:06.616 fused_ordering(387) 00:17:06.616 fused_ordering(388) 00:17:06.616 fused_ordering(389) 00:17:06.616 fused_ordering(390) 00:17:06.616 fused_ordering(391) 00:17:06.616 fused_ordering(392) 00:17:06.616 fused_ordering(393) 00:17:06.616 fused_ordering(394) 00:17:06.616 fused_ordering(395) 00:17:06.616 fused_ordering(396) 00:17:06.616 fused_ordering(397) 00:17:06.616 fused_ordering(398) 00:17:06.616 fused_ordering(399) 00:17:06.616 fused_ordering(400) 00:17:06.616 fused_ordering(401) 00:17:06.616 fused_ordering(402) 00:17:06.616 fused_ordering(403) 00:17:06.616 fused_ordering(404) 00:17:06.616 fused_ordering(405) 00:17:06.616 fused_ordering(406) 00:17:06.616 fused_ordering(407) 00:17:06.616 fused_ordering(408) 00:17:06.616 fused_ordering(409) 00:17:06.616 fused_ordering(410) 00:17:06.876 fused_ordering(411) 00:17:06.876 fused_ordering(412) 00:17:06.876 fused_ordering(413) 00:17:06.876 fused_ordering(414) 00:17:06.876 fused_ordering(415) 00:17:06.876 fused_ordering(416) 00:17:06.876 fused_ordering(417) 00:17:06.876 fused_ordering(418) 00:17:06.876 fused_ordering(419) 00:17:06.876 fused_ordering(420) 00:17:06.876 
fused_ordering(421) 00:17:06.876 fused_ordering(422) 00:17:06.876 fused_ordering(423) 00:17:06.876 fused_ordering(424) 00:17:06.876 fused_ordering(425) 00:17:06.876 fused_ordering(426) 00:17:06.876 fused_ordering(427) 00:17:06.876 fused_ordering(428) 00:17:06.876 fused_ordering(429) 00:17:06.876 fused_ordering(430) 00:17:06.876 fused_ordering(431) 00:17:06.876 fused_ordering(432) 00:17:06.876 fused_ordering(433) 00:17:06.876 fused_ordering(434) 00:17:06.876 fused_ordering(435) 00:17:06.876 fused_ordering(436) 00:17:06.876 fused_ordering(437) 00:17:06.876 fused_ordering(438) 00:17:06.876 fused_ordering(439) 00:17:06.876 fused_ordering(440) 00:17:06.876 fused_ordering(441) 00:17:06.876 fused_ordering(442) 00:17:06.876 fused_ordering(443) 00:17:06.876 fused_ordering(444) 00:17:06.876 fused_ordering(445) 00:17:06.876 fused_ordering(446) 00:17:06.876 fused_ordering(447) 00:17:06.876 fused_ordering(448) 00:17:06.876 fused_ordering(449) 00:17:06.876 fused_ordering(450) 00:17:06.876 fused_ordering(451) 00:17:06.876 fused_ordering(452) 00:17:06.876 fused_ordering(453) 00:17:06.876 fused_ordering(454) 00:17:06.876 fused_ordering(455) 00:17:06.876 fused_ordering(456) 00:17:06.876 fused_ordering(457) 00:17:06.876 fused_ordering(458) 00:17:06.876 fused_ordering(459) 00:17:06.876 fused_ordering(460) 00:17:06.876 fused_ordering(461) 00:17:06.876 fused_ordering(462) 00:17:06.876 fused_ordering(463) 00:17:06.876 fused_ordering(464) 00:17:06.876 fused_ordering(465) 00:17:06.876 fused_ordering(466) 00:17:06.876 fused_ordering(467) 00:17:06.876 fused_ordering(468) 00:17:06.876 fused_ordering(469) 00:17:06.876 fused_ordering(470) 00:17:06.876 fused_ordering(471) 00:17:06.876 fused_ordering(472) 00:17:06.876 fused_ordering(473) 00:17:06.876 fused_ordering(474) 00:17:06.876 fused_ordering(475) 00:17:06.876 fused_ordering(476) 00:17:06.876 fused_ordering(477) 00:17:06.876 fused_ordering(478) 00:17:06.876 fused_ordering(479) 00:17:06.876 fused_ordering(480) 00:17:06.876 fused_ordering(481) 00:17:06.876 fused_ordering(482) 00:17:06.876 fused_ordering(483) 00:17:06.876 fused_ordering(484) 00:17:06.876 fused_ordering(485) 00:17:06.876 fused_ordering(486) 00:17:06.876 fused_ordering(487) 00:17:06.876 fused_ordering(488) 00:17:06.876 fused_ordering(489) 00:17:06.876 fused_ordering(490) 00:17:06.876 fused_ordering(491) 00:17:06.876 fused_ordering(492) 00:17:06.876 fused_ordering(493) 00:17:06.876 fused_ordering(494) 00:17:06.876 fused_ordering(495) 00:17:06.876 fused_ordering(496) 00:17:06.876 fused_ordering(497) 00:17:06.876 fused_ordering(498) 00:17:06.876 fused_ordering(499) 00:17:06.876 fused_ordering(500) 00:17:06.876 fused_ordering(501) 00:17:06.876 fused_ordering(502) 00:17:06.876 fused_ordering(503) 00:17:06.876 fused_ordering(504) 00:17:06.876 fused_ordering(505) 00:17:06.876 fused_ordering(506) 00:17:06.876 fused_ordering(507) 00:17:06.876 fused_ordering(508) 00:17:06.876 fused_ordering(509) 00:17:06.876 fused_ordering(510) 00:17:06.876 fused_ordering(511) 00:17:06.876 fused_ordering(512) 00:17:06.876 fused_ordering(513) 00:17:06.876 fused_ordering(514) 00:17:06.876 fused_ordering(515) 00:17:06.876 fused_ordering(516) 00:17:06.876 fused_ordering(517) 00:17:06.876 fused_ordering(518) 00:17:06.876 fused_ordering(519) 00:17:06.876 fused_ordering(520) 00:17:06.876 fused_ordering(521) 00:17:06.876 fused_ordering(522) 00:17:06.876 fused_ordering(523) 00:17:06.876 fused_ordering(524) 00:17:06.876 fused_ordering(525) 00:17:06.876 fused_ordering(526) 00:17:06.876 fused_ordering(527) 00:17:06.876 fused_ordering(528) 
00:17:06.876 fused_ordering(529) [repetitive counter output condensed: fused_ordering entries 529 through 958 were emitted in sequence between 00:17:06.876 and 00:17:08.014, one entry per counter value, with no other output interleaved] 00:17:08.014 fused_ordering(958)
00:17:08.014 fused_ordering(959) 00:17:08.014 fused_ordering(960) 00:17:08.014 fused_ordering(961) 00:17:08.014 fused_ordering(962) 00:17:08.014 fused_ordering(963) 00:17:08.014 fused_ordering(964) 00:17:08.014 fused_ordering(965) 00:17:08.014 fused_ordering(966) 00:17:08.014 fused_ordering(967) 00:17:08.014 fused_ordering(968) 00:17:08.014 fused_ordering(969) 00:17:08.014 fused_ordering(970) 00:17:08.014 fused_ordering(971) 00:17:08.014 fused_ordering(972) 00:17:08.014 fused_ordering(973) 00:17:08.014 fused_ordering(974) 00:17:08.014 fused_ordering(975) 00:17:08.014 fused_ordering(976) 00:17:08.014 fused_ordering(977) 00:17:08.014 fused_ordering(978) 00:17:08.014 fused_ordering(979) 00:17:08.014 fused_ordering(980) 00:17:08.014 fused_ordering(981) 00:17:08.014 fused_ordering(982) 00:17:08.014 fused_ordering(983) 00:17:08.014 fused_ordering(984) 00:17:08.014 fused_ordering(985) 00:17:08.014 fused_ordering(986) 00:17:08.014 fused_ordering(987) 00:17:08.014 fused_ordering(988) 00:17:08.014 fused_ordering(989) 00:17:08.014 fused_ordering(990) 00:17:08.014 fused_ordering(991) 00:17:08.014 fused_ordering(992) 00:17:08.014 fused_ordering(993) 00:17:08.014 fused_ordering(994) 00:17:08.014 fused_ordering(995) 00:17:08.014 fused_ordering(996) 00:17:08.014 fused_ordering(997) 00:17:08.014 fused_ordering(998) 00:17:08.014 fused_ordering(999) 00:17:08.014 fused_ordering(1000) 00:17:08.014 fused_ordering(1001) 00:17:08.014 fused_ordering(1002) 00:17:08.014 fused_ordering(1003) 00:17:08.014 fused_ordering(1004) 00:17:08.014 fused_ordering(1005) 00:17:08.014 fused_ordering(1006) 00:17:08.014 fused_ordering(1007) 00:17:08.014 fused_ordering(1008) 00:17:08.014 fused_ordering(1009) 00:17:08.014 fused_ordering(1010) 00:17:08.014 fused_ordering(1011) 00:17:08.014 fused_ordering(1012) 00:17:08.014 fused_ordering(1013) 00:17:08.014 fused_ordering(1014) 00:17:08.014 fused_ordering(1015) 00:17:08.014 fused_ordering(1016) 00:17:08.014 fused_ordering(1017) 00:17:08.014 fused_ordering(1018) 00:17:08.014 fused_ordering(1019) 00:17:08.014 fused_ordering(1020) 00:17:08.014 fused_ordering(1021) 00:17:08.014 fused_ordering(1022) 00:17:08.014 fused_ordering(1023) 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:08.014 rmmod nvme_tcp 00:17:08.014 rmmod nvme_fabrics 00:17:08.014 rmmod nvme_keyring 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:08.014 11:12:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 2022878 ']' 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 2022878 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2022878 ']' 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2022878 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2022878 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2022878' 00:17:08.014 killing process with pid 2022878 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2022878 00:17:08.014 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2022878 00:17:08.272 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:08.272 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:08.272 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:08.272 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:08.272 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:17:08.272 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:08.272 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:17:08.272 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:08.272 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:08.272 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.272 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.272 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.182 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:10.182 00:17:10.182 real 0m10.053s 00:17:10.182 user 0m4.794s 00:17:10.182 sys 0m5.413s 00:17:10.182 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:10.182 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:10.182 ************************************ 00:17:10.182 END TEST nvmf_fused_ordering 00:17:10.182 
************************************ 00:17:10.182 11:12:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:10.182 11:12:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:10.182 11:12:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:10.182 11:12:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:10.443 ************************************ 00:17:10.443 START TEST nvmf_ns_masking 00:17:10.443 ************************************ 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:10.443 * Looking for test storage... 00:17:10.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:10.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.443 --rc genhtml_branch_coverage=1 00:17:10.443 --rc genhtml_function_coverage=1 00:17:10.443 --rc genhtml_legend=1 00:17:10.443 --rc geninfo_all_blocks=1 00:17:10.443 --rc geninfo_unexecuted_blocks=1 00:17:10.443 00:17:10.443 ' 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:10.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.443 --rc genhtml_branch_coverage=1 00:17:10.443 --rc genhtml_function_coverage=1 00:17:10.443 --rc genhtml_legend=1 00:17:10.443 --rc geninfo_all_blocks=1 00:17:10.443 --rc geninfo_unexecuted_blocks=1 00:17:10.443 00:17:10.443 ' 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:10.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.443 --rc genhtml_branch_coverage=1 00:17:10.443 --rc genhtml_function_coverage=1 00:17:10.443 --rc genhtml_legend=1 00:17:10.443 --rc geninfo_all_blocks=1 00:17:10.443 --rc geninfo_unexecuted_blocks=1 00:17:10.443 00:17:10.443 ' 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:10.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.443 --rc genhtml_branch_coverage=1 00:17:10.443 --rc genhtml_function_coverage=1 00:17:10.443 --rc genhtml_legend=1 00:17:10.443 --rc geninfo_all_blocks=1 00:17:10.443 --rc geninfo_unexecuted_blocks=1 00:17:10.443 00:17:10.443 ' 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.443 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:10.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=66ab9f01-697b-462e-b5f9-3058de9c7bcd 00:17:10.444 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:10.444 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=aee6674c-aa48-478f-befa-240e23f387c0 00:17:10.444 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:10.444 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:10.444 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:10.444 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:10.444 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=21e610ba-9d14-4c0a-8bc4-13de2f225c44 00:17:10.444 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:10.444 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:10.444 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.444 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:10.444 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:10.444 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:10.444 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.444 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.444 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.704 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:10.704 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:10.704 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:10.704 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:15.992 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:15.992 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:15.992 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:15.992 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:15.992 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:15.992 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:15.993 11:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:15.993 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:15.993 11:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:15.993 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:15.993 Found net devices under 0000:af:00.0: cvl_0_0 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
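The gather_supported_nvmf_pci_devs trace above maps supported NIC PCI functions to kernel net device names by walking sysfs. A minimal standalone sketch of that discovery step follows; the device-ID list here is a small illustrative subset (the Intel E810 IDs 0x1592/0x159b seen in this run) rather than the full table from nvmf/common.sh, and the cvl_0_* names are specific to this node.

#!/usr/bin/env bash
# Sketch: locate kernel net devices behind supported PCI NICs, similar to the
# nvmf/common.sh discovery loop traced above. Device IDs are illustrative only.
set -e

supported_ids=("0x1592" "0x159b")   # Intel E810 variants (subset)
net_devs=()

for pci in /sys/bus/pci/devices/*; do
    dev_id=$(cat "$pci/device" 2>/dev/null) || continue
    for id in "${supported_ids[@]}"; do
        [[ $dev_id == "$id" ]] || continue
        # Each matching PCI function exposes its netdev(s) under net/.
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue
            name=${net##*/}
            echo "Found net device under ${pci##*/}: $name"
            net_devs+=("$name")
        done
    done
done
echo "Discovered ${#net_devs[@]} candidate device(s)"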
00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:15.993 Found net devices under 0000:af:00.1: cvl_0_1 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:15.993 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.251 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.251 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.251 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:16.251 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.251 11:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.251 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.251 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:16.251 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:16.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:17:16.251 00:17:16.251 --- 10.0.0.2 ping statistics --- 00:17:16.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.251 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:17:16.251 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:16.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:17:16.251 00:17:16.252 --- 10.0.0.1 ping statistics --- 00:17:16.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.252 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=2027092 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 2027092 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2027092 ']' 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:16.252 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:16.510 [2024-10-06 11:12:13.832492] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:17:16.510 [2024-10-06 11:12:13.832538] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.510 [2024-10-06 11:12:13.887372] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.510 [2024-10-06 11:12:13.924928] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.510 [2024-10-06 11:12:13.924972] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.510 [2024-10-06 11:12:13.924979] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.510 [2024-10-06 11:12:13.924984] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.510 [2024-10-06 11:12:13.924990] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
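The nvmfappstart step traced above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace (pid 2027092 in this run) and then waits in waitforlisten until the target's JSON-RPC socket answers. A rough sketch of that launch-and-poll pattern, reusing the workspace paths from this run; the fixed retry loop is a simplification of the real waitforlisten helper:

#!/usr/bin/env bash
# Sketch: start the SPDK NVMe-oF target in a network namespace and wait
# for its JSON-RPC socket, approximating nvmfappstart/waitforlisten above.
NETNS=cvl_0_0_ns_spdk
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec "$NETNS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!

# Poll until the target responds on the RPC socket (simplified waitforlisten).
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"
        break
    fi
    sleep 0.5
done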
00:17:16.510 [2024-10-06 11:12:13.925525] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.510 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:16.510 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:16.510 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:16.510 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:16.510 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:16.510 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.510 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:16.769 [2024-10-06 11:12:14.226637] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.769 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:16.769 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:16.769 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:17.027 Malloc1 00:17:17.027 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:17.286 Malloc2 00:17:17.286 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:17.286 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:17.545 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:17.804 [2024-10-06 11:12:15.201217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.804 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:17.804 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 21e610ba-9d14-4c0a-8bc4-13de2f225c44 -a 10.0.0.2 -s 4420 -i 4 00:17:18.062 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:18.062 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:18.062 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:18.062 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:18.062 
11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:19.966 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:19.966 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:19.966 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:19.966 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:19.966 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:19.966 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:19.966 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:19.966 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:19.966 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:19.966 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:19.966 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:19.966 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:19.966 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:19.966 [ 0]:0x1 00:17:19.966 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:19.966 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:19.966 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=47a3f784792b42339644dabbcfb63068 00:17:19.966 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 47a3f784792b42339644dabbcfb63068 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:19.966 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:20.226 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:20.226 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.226 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:20.226 [ 0]:0x1 00:17:20.226 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:20.226 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.226 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=47a3f784792b42339644dabbcfb63068 00:17:20.226 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 47a3f784792b42339644dabbcfb63068 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.226 11:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:20.226 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.226 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:20.485 [ 1]:0x2 00:17:20.485 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:20.485 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.485 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf05670a65fe44a387064afd9c55a037 00:17:20.485 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf05670a65fe44a387064afd9c55a037 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.485 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:20.485 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:20.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.744 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:20.744 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:21.001 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:21.001 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 21e610ba-9d14-4c0a-8bc4-13de2f225c44 -a 10.0.0.2 -s 4420 -i 4 00:17:21.259 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:21.259 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:21.259 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:21.259 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:21.259 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:21.259 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:23.298 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:23.298 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:23.298 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:23.298 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:23.298 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:23.298 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:17:23.298 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:23.298 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:23.298 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:23.298 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:23.298 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:23.298 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:23.298 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:23.298 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:23.298 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:23.298 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:23.299 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:23.299 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:23.299 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.299 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:23.299 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.299 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:23.299 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:23.299 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.299 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:23.299 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:23.299 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:23.299 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:23.299 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:23.299 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.299 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:23.557 [ 0]:0x2 00:17:23.557 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:23.557 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.557 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=bf05670a65fe44a387064afd9c55a037 00:17:23.557 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf05670a65fe44a387064afd9c55a037 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.557 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:23.557 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:23.557 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:23.557 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.557 [ 0]:0x1 00:17:23.557 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:23.557 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.817 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=47a3f784792b42339644dabbcfb63068 00:17:23.817 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 47a3f784792b42339644dabbcfb63068 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.817 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:23.817 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:23.817 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.817 [ 1]:0x2 00:17:23.817 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:23.817 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.817 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf05670a65fe44a387064afd9c55a037 00:17:23.817 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf05670a65fe44a387064afd9c55a037 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.817 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.076 11:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:24.076 [ 0]:0x2 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:24.076 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf05670a65fe44a387064afd9c55a037 00:17:24.077 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf05670a65fe44a387064afd9c55a037 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:24.077 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:24.077 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:24.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.077 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:24.335 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:24.335 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 21e610ba-9d14-4c0a-8bc4-13de2f225c44 -a 10.0.0.2 -s 4420 -i 4 00:17:24.595 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:24.595 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:24.595 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:24.595 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:24.595 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:24.595 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:26.500 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:26.500 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:26.500 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:26.500 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:26.500 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:26.500 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:26.500 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:26.500 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:26.500 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:26.500 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:26.500 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:26.500 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:26.500 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:26.500 [ 0]:0x1 00:17:26.500 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:26.500 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:26.760 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=47a3f784792b42339644dabbcfb63068 00:17:26.760 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 47a3f784792b42339644dabbcfb63068 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:26.760 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:26.760 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:26.760 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:26.760 [ 1]:0x2 00:17:26.760 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:26.760 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:26.760 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf05670a65fe44a387064afd9c55a037 00:17:26.760 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf05670a65fe44a387064afd9c55a037 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:26.760 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:26.760 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:26.760 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:26.760 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:26.760 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:27.020 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:27.020 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:27.020 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:27.020 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:27.020 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:27.020 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:27.020 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:27.020 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:27.020 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:27.020 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:27.020 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:27.020 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:27.020 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:27.020 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:27.020 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:27.021 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:27.021 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:27.021 [ 0]:0x2 00:17:27.021 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:27.021 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:27.021 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf05670a65fe44a387064afd9c55a037 00:17:27.021 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf05670a65fe44a387064afd9c55a037 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:27.021 11:12:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:27.021 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:27.021 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:27.021 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.021 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:27.021 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.021 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:27.021 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.021 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:27.021 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.021 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:27.021 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:27.281 [2024-10-06 11:12:24.615743] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:27.281 request: 00:17:27.281 { 00:17:27.281 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:27.281 "nsid": 2, 00:17:27.281 "host": "nqn.2016-06.io.spdk:host1", 00:17:27.281 "method": "nvmf_ns_remove_host", 00:17:27.281 "req_id": 1 00:17:27.281 } 00:17:27.281 Got JSON-RPC error response 00:17:27.281 response: 00:17:27.281 { 00:17:27.281 "code": -32602, 00:17:27.281 "message": "Invalid parameters" 00:17:27.281 } 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:27.281 11:12:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:27.281 [ 0]:0x2 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf05670a65fe44a387064afd9c55a037 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf05670a65fe44a387064afd9c55a037 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:27.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2029045 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2029045 /var/tmp/host.sock 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2029045 ']' 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:27.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:27.281 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:27.281 [2024-10-06 11:12:24.842423] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:17:27.281 [2024-10-06 11:12:24.842466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2029045 ] 00:17:27.542 [2024-10-06 11:12:24.898033] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.542 [2024-10-06 11:12:24.937067] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.801 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:27.801 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:27.801 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:27.801 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:28.061 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 66ab9f01-697b-462e-b5f9-3058de9c7bcd 00:17:28.061 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:17:28.061 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 66AB9F01697B462EB5F93058DE9C7BCD -i 00:17:28.320 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid aee6674c-aa48-478f-befa-240e23f387c0 00:17:28.320 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:17:28.320 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g AEE6674CAA48478FBEFA240E23F387C0 -i 00:17:28.320 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:28.579 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:28.838 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:28.838 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:29.097 nvme0n1 00:17:29.097 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:29.097 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:29.665 nvme1n2 00:17:29.665 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:29.665 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:29.665 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:29.665 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:29.665 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:29.665 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:29.665 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:29.665 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:29.665 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:29.924 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 66ab9f01-697b-462e-b5f9-3058de9c7bcd == \6\6\a\b\9\f\0\1\-\6\9\7\b\-\4\6\2\e\-\b\5\f\9\-\3\0\5\8\d\e\9\c\7\b\c\d ]] 00:17:29.924 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:29.924 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:29.924 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:30.184 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
aee6674c-aa48-478f-befa-240e23f387c0 == \a\e\e\6\6\7\4\c\-\a\a\4\8\-\4\7\8\f\-\b\e\f\a\-\2\4\0\e\2\3\f\3\8\7\c\0 ]] 00:17:30.184 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2029045 00:17:30.184 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2029045 ']' 00:17:30.184 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2029045 00:17:30.184 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:30.184 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:30.184 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2029045 00:17:30.184 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:30.184 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:30.184 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2029045' 00:17:30.184 killing process with pid 2029045 00:17:30.184 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2029045 00:17:30.184 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2029045 00:17:30.444 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:30.703 rmmod nvme_tcp 00:17:30.703 rmmod nvme_fabrics 00:17:30.703 rmmod nvme_keyring 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 2027092 ']' 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 2027092 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2027092 ']' 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2027092 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@955 -- # uname 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2027092 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2027092' 00:17:30.703 killing process with pid 2027092 00:17:30.703 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2027092 00:17:30.704 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2027092 00:17:30.967 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:30.967 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:30.967 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:30.967 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:30.967 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:17:30.967 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:30.967 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:17:30.967 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:30.967 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:30.967 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.967 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.967 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:33.505 00:17:33.505 real 0m22.738s 00:17:33.505 user 0m23.948s 00:17:33.505 sys 0m6.585s 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:33.505 ************************************ 00:17:33.505 END TEST nvmf_ns_masking 00:17:33.505 ************************************ 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
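Aside: the ns_masking flow that just finished boils down to a handful of nvme-cli and RPC calls. The lines below are a condensed, hand-written sketch of that flow rather than an excerpt of ns_masking.sh; the controller device name /dev/nvme0 and the helper name ns_visible are illustrative assumptions, while the RPC names, NQNs, and the all-zero-NGUID check are taken from the trace above.

    # Sketch only -- summarizes the masking steps traced above, not the test script itself.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2016-06.io.spdk:cnode1
    hostnqn=nqn.2016-06.io.spdk:host1

    # Attach the namespace with visibility disabled by default, then grant it to one host.
    $rpc nvmf_subsystem_add_ns "$subnqn" Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host "$subnqn" 1 "$hostnqn"

    # On the initiator, a namespace counts as visible when it shows up in list-ns and
    # id-ns reports a non-zero NGUID -- the same check ns_is_visible performs above.
    # /dev/nvme0 is an assumed device name for illustration.
    ns_visible() {
        local nsid=$1
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        [[ $(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid) != 00000000000000000000000000000000 ]]
    }
    ns_visible 0x1   # succeeds only while host1 is on the namespace's host list

    # Revoking the host hides the namespace again without detaching it.
    $rpc nvmf_ns_remove_host "$subnqn" 1 "$hostnqn"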
00:17:33.505 ************************************ 00:17:33.505 START TEST nvmf_nvme_cli 00:17:33.505 ************************************ 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:33.505 * Looking for test storage... 00:17:33.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
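Aside: both this test and the masking test above wait for connected controllers by polling block-device serial numbers. The snippet below is a hedged re-creation of that poll loop, simplified from the waitforserial trace at common/autotest_common.sh@1198-1208; the timeout message is an invention, the retry bound, sleep interval, and lsblk/grep pipeline mirror the traced commands.

    # Hedged re-creation of the waitforserial pattern seen in the trace; not the actual helper.
    waitforserial() {
        local serial=$1 expected=${2:-1} i=0 found=0
        while (( i++ <= 15 )); do
            sleep 2
            # Count block devices whose SERIAL column matches, e.g. SPDKISFASTANDAWESOME.
            found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( found == expected )) && return 0
        done
        echo "timed out waiting for $expected device(s) with serial $serial" >&2
        return 1
    }

    # Usage as in the trace above: two controllers connected, so two devices expected.
    waitforserial SPDKISFASTANDAWESOME 2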
ver1_l : ver2_l) )) 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:33.505 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:33.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.506 --rc genhtml_branch_coverage=1 00:17:33.506 --rc genhtml_function_coverage=1 00:17:33.506 --rc genhtml_legend=1 00:17:33.506 --rc geninfo_all_blocks=1 00:17:33.506 --rc geninfo_unexecuted_blocks=1 00:17:33.506 00:17:33.506 ' 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:33.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.506 --rc genhtml_branch_coverage=1 00:17:33.506 --rc genhtml_function_coverage=1 00:17:33.506 --rc genhtml_legend=1 00:17:33.506 --rc geninfo_all_blocks=1 00:17:33.506 --rc geninfo_unexecuted_blocks=1 00:17:33.506 00:17:33.506 ' 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:33.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.506 --rc genhtml_branch_coverage=1 00:17:33.506 --rc genhtml_function_coverage=1 00:17:33.506 --rc genhtml_legend=1 00:17:33.506 --rc geninfo_all_blocks=1 00:17:33.506 --rc geninfo_unexecuted_blocks=1 00:17:33.506 00:17:33.506 ' 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:33.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.506 --rc genhtml_branch_coverage=1 00:17:33.506 --rc genhtml_function_coverage=1 00:17:33.506 --rc genhtml_legend=1 00:17:33.506 --rc geninfo_all_blocks=1 00:17:33.506 --rc geninfo_unexecuted_blocks=1 00:17:33.506 00:17:33.506 ' 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.506 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:33.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:33.507 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:33.507 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:33.507 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:33.507 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:33.507 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:33.507 11:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:33.507 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:33.507 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:33.507 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.507 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:33.507 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:33.507 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:33.507 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.507 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.507 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.507 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:33.507 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:33.507 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:33.507 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:38.784 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:38.784 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.784 
11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:38.784 Found net devices under 0000:af:00.0: cvl_0_0 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:38.784 Found net devices under 0000:af:00.1: cvl_0_1 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.784 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:38.785 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:38.785 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:39.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:39.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:17:39.044 00:17:39.044 --- 10.0.0.2 ping statistics --- 00:17:39.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.044 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:39.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:39.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:17:39.044 00:17:39.044 --- 10.0.0.1 ping statistics --- 00:17:39.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.044 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=2033198 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 2033198 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2033198 ']' 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:39.044 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.044 [2024-10-06 11:12:36.499628] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:17:39.045 [2024-10-06 11:12:36.499671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.045 [2024-10-06 11:12:36.560947] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:39.045 [2024-10-06 11:12:36.601116] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.045 [2024-10-06 11:12:36.601159] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.045 [2024-10-06 11:12:36.601166] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.045 [2024-10-06 11:12:36.601173] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.045 [2024-10-06 11:12:36.601178] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:39.045 [2024-10-06 11:12:36.602680] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.045 [2024-10-06 11:12:36.602699] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.045 [2024-10-06 11:12:36.602793] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:17:39.045 [2024-10-06 11:12:36.602794] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.304 [2024-10-06 11:12:36.760378] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.304 Malloc0 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.304 Malloc1 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.304 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.305 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.305 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:39.305 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.305 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.305 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.305 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:39.305 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.305 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.305 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.305 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:39.305 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.305 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.305 [2024-10-06 11:12:36.844894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.305 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.305 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:39.305 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.305 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.305 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.305 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:17:39.564 00:17:39.564 Discovery Log Number of Records 2, Generation counter 2 00:17:39.564 =====Discovery Log Entry 0====== 00:17:39.564 trtype: tcp 00:17:39.564 adrfam: ipv4 00:17:39.564 subtype: current discovery subsystem 00:17:39.564 treq: not required 00:17:39.564 portid: 0 00:17:39.564 trsvcid: 4420 00:17:39.564 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:17:39.564 traddr: 10.0.0.2 00:17:39.564 eflags: explicit discovery connections, duplicate discovery information 00:17:39.564 sectype: none 00:17:39.564 =====Discovery Log Entry 1====== 00:17:39.564 trtype: tcp 00:17:39.564 adrfam: ipv4 00:17:39.564 subtype: nvme subsystem 00:17:39.564 treq: not required 00:17:39.564 portid: 0 00:17:39.564 trsvcid: 4420 00:17:39.564 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:39.564 traddr: 10.0.0.2 00:17:39.564 eflags: none 00:17:39.564 sectype: none 00:17:39.564 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:39.564 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:39.564 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:17:39.564 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:39.564 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:17:39.564 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:17:39.564 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:39.564 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:17:39.564 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:39.564 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:39.564 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:40.951 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:40.951 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:17:40.951 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:40.951 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:40.951 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:40.951 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:42.857 11:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:42.857 /dev/nvme0n2 ]] 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:42.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.857 11:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:42.857 rmmod nvme_tcp 00:17:42.857 rmmod nvme_fabrics 00:17:42.857 rmmod nvme_keyring 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 2033198 ']' 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 2033198 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2033198 ']' 00:17:42.857 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2033198 00:17:43.117 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:17:43.117 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.117 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
2033198 00:17:43.117 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:43.117 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:43.117 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2033198' 00:17:43.117 killing process with pid 2033198 00:17:43.117 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2033198 00:17:43.117 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2033198 00:17:43.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:43.377 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:43.377 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:43.377 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:43.377 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:17:43.377 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:17:43.377 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:43.377 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:43.377 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:43.377 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.377 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.377 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.282 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:45.282 00:17:45.282 real 0m12.215s 00:17:45.282 user 0m17.864s 00:17:45.282 sys 0m4.907s 00:17:45.282 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:45.282 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:45.282 ************************************ 00:17:45.282 END TEST nvmf_nvme_cli 00:17:45.282 ************************************ 00:17:45.282 11:12:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:45.282 11:12:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:45.282 11:12:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:45.283 11:12:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:45.283 11:12:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:45.543 ************************************ 00:17:45.543 START TEST nvmf_vfio_user 00:17:45.543 ************************************ 00:17:45.543 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:17:45.543 * Looking for test storage... 00:17:45.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.543 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:45.543 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:17:45.543 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:45.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.543 --rc genhtml_branch_coverage=1 00:17:45.543 --rc genhtml_function_coverage=1 00:17:45.543 --rc genhtml_legend=1 00:17:45.543 --rc geninfo_all_blocks=1 00:17:45.543 --rc geninfo_unexecuted_blocks=1 00:17:45.543 00:17:45.543 ' 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:45.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.543 --rc genhtml_branch_coverage=1 00:17:45.543 --rc genhtml_function_coverage=1 00:17:45.543 --rc genhtml_legend=1 00:17:45.543 --rc geninfo_all_blocks=1 00:17:45.543 --rc geninfo_unexecuted_blocks=1 00:17:45.543 00:17:45.543 ' 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:45.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.543 --rc genhtml_branch_coverage=1 00:17:45.543 --rc genhtml_function_coverage=1 00:17:45.543 --rc genhtml_legend=1 00:17:45.543 --rc geninfo_all_blocks=1 00:17:45.543 --rc geninfo_unexecuted_blocks=1 00:17:45.543 00:17:45.543 ' 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:45.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.543 --rc genhtml_branch_coverage=1 00:17:45.543 --rc genhtml_function_coverage=1 00:17:45.543 --rc genhtml_legend=1 00:17:45.543 --rc geninfo_all_blocks=1 00:17:45.543 --rc geninfo_unexecuted_blocks=1 00:17:45.543 00:17:45.543 ' 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.543 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:45.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2034358 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2034358' 00:17:45.544 Process pid: 2034358 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2034358 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2034358 ']' 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:45.544 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:45.544 [2024-10-06 11:12:43.089833] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:17:45.544 [2024-10-06 11:12:43.089885] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.803 [2024-10-06 11:12:43.145147] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:45.803 [2024-10-06 11:12:43.184694] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.803 [2024-10-06 11:12:43.184735] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:45.803 [2024-10-06 11:12:43.184742] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:45.803 [2024-10-06 11:12:43.184748] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:45.803 [2024-10-06 11:12:43.184753] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:45.803 [2024-10-06 11:12:43.186252] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.803 [2024-10-06 11:12:43.186352] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.803 [2024-10-06 11:12:43.186442] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:17:45.803 [2024-10-06 11:12:43.186443] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.803 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:45.803 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:45.803 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:46.739 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:46.998 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:46.998 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:46.998 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:46.998 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:46.998 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:47.257 Malloc1 00:17:47.257 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:47.515 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:47.774 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:47.774 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:47.774 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:47.774 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:48.033 Malloc2 00:17:48.033 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:17:48.292 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:48.551 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:48.551 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:48.551 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:48.813 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:48.813 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:48.813 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:48.813 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:48.813 [2024-10-06 11:12:46.152968] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:17:48.813 [2024-10-06 11:12:46.153013] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2034929 ] 00:17:48.813 [2024-10-06 11:12:46.179222] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:48.813 [2024-10-06 11:12:46.184930] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:48.813 [2024-10-06 11:12:46.184947] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f74e6481000 00:17:48.813 [2024-10-06 11:12:46.185931] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:48.813 [2024-10-06 11:12:46.186929] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:48.813 [2024-10-06 11:12:46.187937] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:48.813 [2024-10-06 11:12:46.188959] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:48.813 [2024-10-06 11:12:46.189943] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:48.813 [2024-10-06 11:12:46.190957] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:48.813 [2024-10-06 11:12:46.191961] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:17:48.813 [2024-10-06 11:12:46.192967] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:48.813 [2024-10-06 11:12:46.193976] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:48.813 [2024-10-06 11:12:46.193989] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f74e518a000 00:17:48.813 [2024-10-06 11:12:46.194896] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:48.813 [2024-10-06 11:12:46.204287] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:48.813 [2024-10-06 11:12:46.204316] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:17:48.813 [2024-10-06 11:12:46.210081] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:48.813 [2024-10-06 11:12:46.210114] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:48.813 [2024-10-06 11:12:46.210184] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:17:48.813 [2024-10-06 11:12:46.210200] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:17:48.813 [2024-10-06 11:12:46.210205] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:17:48.813 [2024-10-06 11:12:46.211073] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:48.813 [2024-10-06 11:12:46.211081] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:17:48.813 [2024-10-06 11:12:46.211087] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:17:48.813 [2024-10-06 11:12:46.212077] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:48.813 [2024-10-06 11:12:46.212084] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:17:48.813 [2024-10-06 11:12:46.212090] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:17:48.813 [2024-10-06 11:12:46.213086] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:48.813 [2024-10-06 11:12:46.213094] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:48.813 [2024-10-06 11:12:46.214093] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:48.813 [2024-10-06 
11:12:46.214100] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:17:48.813 [2024-10-06 11:12:46.214104] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:17:48.813 [2024-10-06 11:12:46.214110] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:48.813 [2024-10-06 11:12:46.214215] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:17:48.813 [2024-10-06 11:12:46.214219] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:48.813 [2024-10-06 11:12:46.214223] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:48.813 [2024-10-06 11:12:46.215105] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:48.813 [2024-10-06 11:12:46.216108] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:48.813 [2024-10-06 11:12:46.217117] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:48.813 [2024-10-06 11:12:46.218119] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:48.813 [2024-10-06 11:12:46.218222] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:48.813 [2024-10-06 11:12:46.219130] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:48.813 [2024-10-06 11:12:46.219137] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:48.813 [2024-10-06 11:12:46.219141] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:17:48.813 [2024-10-06 11:12:46.219157] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:17:48.813 [2024-10-06 11:12:46.219170] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:17:48.813 [2024-10-06 11:12:46.219183] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:48.813 [2024-10-06 11:12:46.219187] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:48.813 [2024-10-06 11:12:46.219191] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:48.813 [2024-10-06 11:12:46.219203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:48.813 [2024-10-06 11:12:46.219252] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:48.813 [2024-10-06 11:12:46.219260] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:17:48.813 [2024-10-06 11:12:46.219265] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:17:48.813 [2024-10-06 11:12:46.219269] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:17:48.813 [2024-10-06 11:12:46.219272] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:48.813 [2024-10-06 11:12:46.219276] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:17:48.813 [2024-10-06 11:12:46.219280] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:17:48.813 [2024-10-06 11:12:46.219284] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:17:48.813 [2024-10-06 11:12:46.219291] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:17:48.813 [2024-10-06 11:12:46.219300] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:48.813 [2024-10-06 11:12:46.219318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:48.813 [2024-10-06 11:12:46.219328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.813 [2024-10-06 11:12:46.219339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.813 [2024-10-06 11:12:46.219347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.813 [2024-10-06 11:12:46.219355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.813 [2024-10-06 11:12:46.219359] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:48.813 [2024-10-06 11:12:46.219366] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:48.813 [2024-10-06 11:12:46.219374] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:48.813 [2024-10-06 11:12:46.219383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:48.814 [2024-10-06 11:12:46.219389] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:17:48.814 [2024-10-06 11:12:46.219393] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:48.814 [2024-10-06 11:12:46.219399] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:17:48.814 [2024-10-06 11:12:46.219406] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:48.814 [2024-10-06 11:12:46.219414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:48.814 [2024-10-06 11:12:46.219425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:48.814 [2024-10-06 11:12:46.219473] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:17:48.814 [2024-10-06 11:12:46.219480] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:48.814 [2024-10-06 11:12:46.219486] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:48.814 [2024-10-06 11:12:46.219490] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:48.814 [2024-10-06 11:12:46.219493] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:48.814 [2024-10-06 11:12:46.219499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:48.814 [2024-10-06 11:12:46.219511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:48.814 [2024-10-06 11:12:46.219520] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:17:48.814 [2024-10-06 11:12:46.219528] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:17:48.814 [2024-10-06 11:12:46.219534] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:17:48.814 [2024-10-06 11:12:46.219540] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:48.814 [2024-10-06 11:12:46.219544] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:48.814 [2024-10-06 11:12:46.219547] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:48.814 [2024-10-06 11:12:46.219553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:48.814 [2024-10-06 11:12:46.219574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:48.814 [2024-10-06 11:12:46.219585] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:48.814 [2024-10-06 11:12:46.219592] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:48.814 [2024-10-06 11:12:46.219598] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:48.814 [2024-10-06 11:12:46.219602] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:48.814 [2024-10-06 11:12:46.219605] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:48.814 [2024-10-06 11:12:46.219610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:48.814 [2024-10-06 11:12:46.219619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:48.814 [2024-10-06 11:12:46.219626] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:48.814 [2024-10-06 11:12:46.219632] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:17:48.814 [2024-10-06 11:12:46.219639] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:17:48.814 [2024-10-06 11:12:46.219643] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:48.814 [2024-10-06 11:12:46.219648] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:48.814 [2024-10-06 11:12:46.219653] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:17:48.814 [2024-10-06 11:12:46.219657] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:17:48.814 [2024-10-06 11:12:46.219661] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:17:48.814 [2024-10-06 11:12:46.219665] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:17:48.814 [2024-10-06 11:12:46.219682] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:48.814 [2024-10-06 11:12:46.219693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:48.814 [2024-10-06 11:12:46.219703] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:48.814 [2024-10-06 11:12:46.219711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:48.814 [2024-10-06 11:12:46.219720] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:48.814 [2024-10-06 11:12:46.219734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:48.814 [2024-10-06 11:12:46.219744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:48.814 [2024-10-06 11:12:46.219755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:48.814 [2024-10-06 11:12:46.219767] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:48.814 [2024-10-06 11:12:46.219771] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:48.814 [2024-10-06 11:12:46.219774] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:48.814 [2024-10-06 11:12:46.219777] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:48.814 [2024-10-06 11:12:46.219780] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:48.814 [2024-10-06 11:12:46.219785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:48.814 [2024-10-06 11:12:46.219791] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:48.814 [2024-10-06 11:12:46.219795] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:48.814 [2024-10-06 11:12:46.219797] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:48.814 [2024-10-06 11:12:46.219802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:48.814 [2024-10-06 11:12:46.219809] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:48.814 [2024-10-06 11:12:46.219812] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:48.814 [2024-10-06 11:12:46.219815] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:48.814 [2024-10-06 11:12:46.219820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:48.814 [2024-10-06 11:12:46.219827] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:48.814 [2024-10-06 11:12:46.219830] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:48.814 [2024-10-06 11:12:46.219833] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:48.814 [2024-10-06 11:12:46.219838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:48.814 [2024-10-06 11:12:46.219844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:48.814 [2024-10-06 11:12:46.219855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:48.814 [2024-10-06 11:12:46.219864] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:48.814 [2024-10-06 11:12:46.219870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:48.814 ===================================================== 00:17:48.814 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:48.814 ===================================================== 00:17:48.814 Controller Capabilities/Features 00:17:48.814 ================================ 00:17:48.814 Vendor ID: 4e58 00:17:48.814 Subsystem Vendor ID: 4e58 00:17:48.814 Serial Number: SPDK1 00:17:48.814 Model Number: SPDK bdev Controller 00:17:48.814 Firmware Version: 25.01 00:17:48.814 Recommended Arb Burst: 6 00:17:48.814 IEEE OUI Identifier: 8d 6b 50 00:17:48.814 Multi-path I/O 00:17:48.814 May have multiple subsystem ports: Yes 00:17:48.814 May have multiple controllers: Yes 00:17:48.814 Associated with SR-IOV VF: No 00:17:48.814 Max Data Transfer Size: 131072 00:17:48.814 Max Number of Namespaces: 32 00:17:48.814 Max Number of I/O Queues: 127 00:17:48.814 NVMe Specification Version (VS): 1.3 00:17:48.814 NVMe Specification Version (Identify): 1.3 00:17:48.814 Maximum Queue Entries: 256 00:17:48.814 Contiguous Queues Required: Yes 00:17:48.814 Arbitration Mechanisms Supported 00:17:48.814 Weighted Round Robin: Not Supported 00:17:48.814 Vendor Specific: Not Supported 00:17:48.814 Reset Timeout: 15000 ms 00:17:48.814 Doorbell Stride: 4 bytes 00:17:48.814 NVM Subsystem Reset: Not Supported 00:17:48.814 Command Sets Supported 00:17:48.814 NVM Command Set: Supported 00:17:48.814 Boot Partition: Not Supported 00:17:48.814 Memory Page Size Minimum: 4096 bytes 00:17:48.814 Memory Page Size Maximum: 4096 bytes 00:17:48.814 Persistent Memory Region: Not Supported 00:17:48.814 Optional Asynchronous Events Supported 00:17:48.814 Namespace Attribute Notices: Supported 00:17:48.814 Firmware Activation Notices: Not Supported 00:17:48.815 ANA Change Notices: Not Supported 00:17:48.815 PLE Aggregate Log Change Notices: Not Supported 00:17:48.815 LBA Status Info Alert Notices: Not Supported 00:17:48.815 EGE Aggregate Log Change Notices: Not Supported 00:17:48.815 Normal NVM Subsystem Shutdown event: Not Supported 00:17:48.815 Zone Descriptor Change Notices: Not Supported 00:17:48.815 Discovery Log Change Notices: Not Supported 00:17:48.815 Controller Attributes 00:17:48.815 128-bit Host Identifier: Supported 00:17:48.815 Non-Operational Permissive Mode: Not Supported 00:17:48.815 NVM Sets: Not Supported 00:17:48.815 Read Recovery Levels: Not Supported 00:17:48.815 Endurance Groups: Not Supported 00:17:48.815 Predictable Latency Mode: Not Supported 00:17:48.815 Traffic Based Keep ALive: Not Supported 00:17:48.815 Namespace Granularity: Not Supported 00:17:48.815 SQ Associations: Not Supported 00:17:48.815 UUID List: Not Supported 00:17:48.815 Multi-Domain Subsystem: Not Supported 00:17:48.815 Fixed Capacity Management: Not Supported 00:17:48.815 Variable Capacity Management: Not Supported 00:17:48.815 Delete Endurance Group: Not Supported 00:17:48.815 Delete NVM Set: Not Supported 00:17:48.815 Extended LBA Formats Supported: Not Supported 00:17:48.815 Flexible Data Placement Supported: Not Supported 00:17:48.815 00:17:48.815 Controller Memory Buffer Support 00:17:48.815 ================================ 00:17:48.815 Supported: No 00:17:48.815 00:17:48.815 Persistent Memory Region Support 00:17:48.815 
================================ 00:17:48.815 Supported: No 00:17:48.815 00:17:48.815 Admin Command Set Attributes 00:17:48.815 ============================ 00:17:48.815 Security Send/Receive: Not Supported 00:17:48.815 Format NVM: Not Supported 00:17:48.815 Firmware Activate/Download: Not Supported 00:17:48.815 Namespace Management: Not Supported 00:17:48.815 Device Self-Test: Not Supported 00:17:48.815 Directives: Not Supported 00:17:48.815 NVMe-MI: Not Supported 00:17:48.815 Virtualization Management: Not Supported 00:17:48.815 Doorbell Buffer Config: Not Supported 00:17:48.815 Get LBA Status Capability: Not Supported 00:17:48.815 Command & Feature Lockdown Capability: Not Supported 00:17:48.815 Abort Command Limit: 4 00:17:48.815 Async Event Request Limit: 4 00:17:48.815 Number of Firmware Slots: N/A 00:17:48.815 Firmware Slot 1 Read-Only: N/A 00:17:48.815 Firmware Activation Without Reset: N/A 00:17:48.815 Multiple Update Detection Support: N/A 00:17:48.815 Firmware Update Granularity: No Information Provided 00:17:48.815 Per-Namespace SMART Log: No 00:17:48.815 Asymmetric Namespace Access Log Page: Not Supported 00:17:48.815 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:48.815 Command Effects Log Page: Supported 00:17:48.815 Get Log Page Extended Data: Supported 00:17:48.815 Telemetry Log Pages: Not Supported 00:17:48.815 Persistent Event Log Pages: Not Supported 00:17:48.815 Supported Log Pages Log Page: May Support 00:17:48.815 Commands Supported & Effects Log Page: Not Supported 00:17:48.815 Feature Identifiers & Effects Log Page:May Support 00:17:48.815 NVMe-MI Commands & Effects Log Page: May Support 00:17:48.815 Data Area 4 for Telemetry Log: Not Supported 00:17:48.815 Error Log Page Entries Supported: 128 00:17:48.815 Keep Alive: Supported 00:17:48.815 Keep Alive Granularity: 10000 ms 00:17:48.815 00:17:48.815 NVM Command Set Attributes 00:17:48.815 ========================== 00:17:48.815 Submission Queue Entry Size 00:17:48.815 Max: 64 00:17:48.815 Min: 64 00:17:48.815 Completion Queue Entry Size 00:17:48.815 Max: 16 00:17:48.815 Min: 16 00:17:48.815 Number of Namespaces: 32 00:17:48.815 Compare Command: Supported 00:17:48.815 Write Uncorrectable Command: Not Supported 00:17:48.815 Dataset Management Command: Supported 00:17:48.815 Write Zeroes Command: Supported 00:17:48.815 Set Features Save Field: Not Supported 00:17:48.815 Reservations: Not Supported 00:17:48.815 Timestamp: Not Supported 00:17:48.815 Copy: Supported 00:17:48.815 Volatile Write Cache: Present 00:17:48.815 Atomic Write Unit (Normal): 1 00:17:48.815 Atomic Write Unit (PFail): 1 00:17:48.815 Atomic Compare & Write Unit: 1 00:17:48.815 Fused Compare & Write: Supported 00:17:48.815 Scatter-Gather List 00:17:48.815 SGL Command Set: Supported (Dword aligned) 00:17:48.815 SGL Keyed: Not Supported 00:17:48.815 SGL Bit Bucket Descriptor: Not Supported 00:17:48.815 SGL Metadata Pointer: Not Supported 00:17:48.815 Oversized SGL: Not Supported 00:17:48.815 SGL Metadata Address: Not Supported 00:17:48.815 SGL Offset: Not Supported 00:17:48.815 Transport SGL Data Block: Not Supported 00:17:48.815 Replay Protected Memory Block: Not Supported 00:17:48.815 00:17:48.815 Firmware Slot Information 00:17:48.815 ========================= 00:17:48.815 Active slot: 1 00:17:48.815 Slot 1 Firmware Revision: 25.01 00:17:48.815 00:17:48.815 00:17:48.815 Commands Supported and Effects 00:17:48.815 ============================== 00:17:48.815 Admin Commands 00:17:48.815 -------------- 00:17:48.815 Get Log Page (02h): Supported 
00:17:48.815 Identify (06h): Supported 00:17:48.815 Abort (08h): Supported 00:17:48.815 Set Features (09h): Supported 00:17:48.815 Get Features (0Ah): Supported 00:17:48.815 Asynchronous Event Request (0Ch): Supported 00:17:48.815 Keep Alive (18h): Supported 00:17:48.815 I/O Commands 00:17:48.815 ------------ 00:17:48.815 Flush (00h): Supported LBA-Change 00:17:48.815 Write (01h): Supported LBA-Change 00:17:48.815 Read (02h): Supported 00:17:48.815 Compare (05h): Supported 00:17:48.815 Write Zeroes (08h): Supported LBA-Change 00:17:48.815 Dataset Management (09h): Supported LBA-Change 00:17:48.815 Copy (19h): Supported LBA-Change 00:17:48.815 00:17:48.815 Error Log 00:17:48.815 ========= 00:17:48.815 00:17:48.815 Arbitration 00:17:48.815 =========== 00:17:48.815 Arbitration Burst: 1 00:17:48.815 00:17:48.815 Power Management 00:17:48.815 ================ 00:17:48.815 Number of Power States: 1 00:17:48.815 Current Power State: Power State #0 00:17:48.815 Power State #0: 00:17:48.815 Max Power: 0.00 W 00:17:48.815 Non-Operational State: Operational 00:17:48.815 Entry Latency: Not Reported 00:17:48.815 Exit Latency: Not Reported 00:17:48.815 Relative Read Throughput: 0 00:17:48.815 Relative Read Latency: 0 00:17:48.815 Relative Write Throughput: 0 00:17:48.815 Relative Write Latency: 0 00:17:48.815 Idle Power: Not Reported 00:17:48.815 Active Power: Not Reported 00:17:48.815 Non-Operational Permissive Mode: Not Supported 00:17:48.815 00:17:48.815 Health Information 00:17:48.815 ================== 00:17:48.815 Critical Warnings: 00:17:48.815 Available Spare Space: OK 00:17:48.815 Temperature: OK 00:17:48.815 Device Reliability: OK 00:17:48.815 Read Only: No 00:17:48.815 Volatile Memory Backup: OK 00:17:48.815 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:48.815 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:48.815 Available Spare: 0% 00:17:48.815 Available Sp[2024-10-06 11:12:46.219949] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:48.815 [2024-10-06 11:12:46.219958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:48.815 [2024-10-06 11:12:46.219981] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:17:48.815 [2024-10-06 11:12:46.219990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:48.815 [2024-10-06 11:12:46.219996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:48.815 [2024-10-06 11:12:46.220001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:48.815 [2024-10-06 11:12:46.220008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:48.815 [2024-10-06 11:12:46.220137] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:48.815 [2024-10-06 11:12:46.220147] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:48.815 [2024-10-06 11:12:46.221143] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:17:48.815 [2024-10-06 11:12:46.221191] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:17:48.815 [2024-10-06 11:12:46.221198] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:17:48.815 [2024-10-06 11:12:46.222147] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:48.815 [2024-10-06 11:12:46.222156] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:17:48.815 [2024-10-06 11:12:46.222209] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:48.815 [2024-10-06 11:12:46.225065] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:48.815 are Threshold: 0% 00:17:48.815 Life Percentage Used: 0% 00:17:48.815 Data Units Read: 0 00:17:48.816 Data Units Written: 0 00:17:48.816 Host Read Commands: 0 00:17:48.816 Host Write Commands: 0 00:17:48.816 Controller Busy Time: 0 minutes 00:17:48.816 Power Cycles: 0 00:17:48.816 Power On Hours: 0 hours 00:17:48.816 Unsafe Shutdowns: 0 00:17:48.816 Unrecoverable Media Errors: 0 00:17:48.816 Lifetime Error Log Entries: 0 00:17:48.816 Warning Temperature Time: 0 minutes 00:17:48.816 Critical Temperature Time: 0 minutes 00:17:48.816 00:17:48.816 Number of Queues 00:17:48.816 ================ 00:17:48.816 Number of I/O Submission Queues: 127 00:17:48.816 Number of I/O Completion Queues: 127 00:17:48.816 00:17:48.816 Active Namespaces 00:17:48.816 ================= 00:17:48.816 Namespace ID:1 00:17:48.816 Error Recovery Timeout: Unlimited 00:17:48.816 Command Set Identifier: NVM (00h) 00:17:48.816 Deallocate: Supported 00:17:48.816 Deallocated/Unwritten Error: Not Supported 00:17:48.816 Deallocated Read Value: Unknown 00:17:48.816 Deallocate in Write Zeroes: Not Supported 00:17:48.816 Deallocated Guard Field: 0xFFFF 00:17:48.816 Flush: Supported 00:17:48.816 Reservation: Supported 00:17:48.816 Namespace Sharing Capabilities: Multiple Controllers 00:17:48.816 Size (in LBAs): 131072 (0GiB) 00:17:48.816 Capacity (in LBAs): 131072 (0GiB) 00:17:48.816 Utilization (in LBAs): 131072 (0GiB) 00:17:48.816 NGUID: 7D6FC28C9F3E41028418370B192D1703 00:17:48.816 UUID: 7d6fc28c-9f3e-4102-8418-370b192d1703 00:17:48.816 Thin Provisioning: Not Supported 00:17:48.816 Per-NS Atomic Units: Yes 00:17:48.816 Atomic Boundary Size (Normal): 0 00:17:48.816 Atomic Boundary Size (PFail): 0 00:17:48.816 Atomic Boundary Offset: 0 00:17:48.816 Maximum Single Source Range Length: 65535 00:17:48.816 Maximum Copy Length: 65535 00:17:48.816 Maximum Source Range Count: 1 00:17:48.816 NGUID/EUI64 Never Reused: No 00:17:48.816 Namespace Write Protected: No 00:17:48.816 Number of LBA Formats: 1 00:17:48.816 Current LBA Format: LBA Format #00 00:17:48.816 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:48.816 00:17:48.816 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:49.074 [2024-10-06 11:12:46.437124] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:54.345 Initializing NVMe Controllers 00:17:54.345 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:54.345 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:54.345 Initialization complete. Launching workers. 00:17:54.345 ======================================================== 00:17:54.345 Latency(us) 00:17:54.345 Device Information : IOPS MiB/s Average min max 00:17:54.345 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39938.40 156.01 3205.52 933.65 7163.37 00:17:54.345 ======================================================== 00:17:54.346 Total : 39938.40 156.01 3205.52 933.65 7163.37 00:17:54.346 00:17:54.346 [2024-10-06 11:12:51.462725] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:54.346 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:54.346 [2024-10-06 11:12:51.678726] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:59.612 Initializing NVMe Controllers 00:17:59.612 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:59.612 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:59.612 Initialization complete. Launching workers. 00:17:59.612 ======================================================== 00:17:59.612 Latency(us) 00:17:59.612 Device Information : IOPS MiB/s Average min max 00:17:59.612 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16041.00 62.66 7989.71 5991.97 15443.74 00:17:59.612 ======================================================== 00:17:59.612 Total : 16041.00 62.66 7989.71 5991.97 15443.74 00:17:59.612 00:17:59.612 [2024-10-06 11:12:56.719926] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:59.612 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:59.612 [2024-10-06 11:12:56.913875] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:04.883 [2024-10-06 11:13:01.974317] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:04.883 Initializing NVMe Controllers 00:18:04.883 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:04.883 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:04.883 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:04.883 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:04.883 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:04.883 Initialization complete. Launching workers. 
00:18:04.883 Starting thread on core 2 00:18:04.883 Starting thread on core 3 00:18:04.883 Starting thread on core 1 00:18:04.883 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:04.883 [2024-10-06 11:13:02.252463] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:08.174 [2024-10-06 11:13:05.322012] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:08.174 Initializing NVMe Controllers 00:18:08.174 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:08.174 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:08.174 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:08.174 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:08.174 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:08.174 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:08.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:08.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:08.174 Initialization complete. Launching workers. 00:18:08.174 Starting thread on core 1 with urgent priority queue 00:18:08.174 Starting thread on core 2 with urgent priority queue 00:18:08.174 Starting thread on core 3 with urgent priority queue 00:18:08.174 Starting thread on core 0 with urgent priority queue 00:18:08.174 SPDK bdev Controller (SPDK1 ) core 0: 8142.00 IO/s 12.28 secs/100000 ios 00:18:08.174 SPDK bdev Controller (SPDK1 ) core 1: 8019.00 IO/s 12.47 secs/100000 ios 00:18:08.174 SPDK bdev Controller (SPDK1 ) core 2: 11098.33 IO/s 9.01 secs/100000 ios 00:18:08.174 SPDK bdev Controller (SPDK1 ) core 3: 7935.67 IO/s 12.60 secs/100000 ios 00:18:08.174 ======================================================== 00:18:08.174 00:18:08.174 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:08.174 [2024-10-06 11:13:05.587937] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:08.174 Initializing NVMe Controllers 00:18:08.174 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:08.174 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:08.174 Namespace ID: 1 size: 0GB 00:18:08.174 Initialization complete. 00:18:08.174 INFO: using host memory buffer for IO 00:18:08.174 Hello world! 
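The example apps exercised above (spdk_nvme_identify, spdk_nvme_perf, reconnect, arbitration, hello_world) all reach the target the same way: instead of a PCIe BDF or a TCP address:port they pass a VFIOUSER transport ID that points at the per-controller socket directory created by the target. The following is only a sketch that mirrors the rpc.py and tool invocations visible in this log (the @73/@74 calls shown earlier do the same for cnode2/Malloc2; the paths, NQNs and flags below are the cnode1 values the identify and perf runs above actually use, nothing new is assumed):

    # target side: back the subsystem with a malloc bdev and listen on a vfio-user socket directory
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

    # host side: any SPDK NVMe example attaches through the same directory via the transport ID string
    build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g
    build/bin/spdk_nvme_perf \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2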
00:18:08.174 [2024-10-06 11:13:05.623153] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:08.174 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:08.433 [2024-10-06 11:13:05.884162] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:09.371 Initializing NVMe Controllers 00:18:09.371 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:09.371 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:09.371 Initialization complete. Launching workers. 00:18:09.371 submit (in ns) avg, min, max = 8451.3, 3143.8, 4996117.1 00:18:09.371 complete (in ns) avg, min, max = 19512.9, 1719.0, 4000345.7 00:18:09.371 00:18:09.371 Submit histogram 00:18:09.371 ================ 00:18:09.371 Range in us Cumulative Count 00:18:09.371 3.139 - 3.154: 0.0060% ( 1) 00:18:09.371 3.154 - 3.170: 0.0239% ( 3) 00:18:09.371 3.170 - 3.185: 0.0299% ( 1) 00:18:09.371 3.185 - 3.200: 0.0957% ( 11) 00:18:09.371 3.200 - 3.215: 1.0348% ( 157) 00:18:09.371 3.215 - 3.230: 4.6594% ( 606) 00:18:09.371 3.230 - 3.246: 10.3475% ( 951) 00:18:09.371 3.246 - 3.261: 16.5740% ( 1041) 00:18:09.371 3.261 - 3.276: 24.1462% ( 1266) 00:18:09.371 3.276 - 3.291: 31.0844% ( 1160) 00:18:09.371 3.291 - 3.307: 37.1434% ( 1013) 00:18:09.371 3.307 - 3.322: 42.4966% ( 895) 00:18:09.371 3.322 - 3.337: 47.5806% ( 850) 00:18:09.371 3.337 - 3.352: 51.5581% ( 665) 00:18:09.371 3.352 - 3.368: 55.6672% ( 687) 00:18:09.371 3.368 - 3.383: 62.2884% ( 1107) 00:18:09.371 3.383 - 3.398: 67.9227% ( 942) 00:18:09.371 3.398 - 3.413: 73.3417% ( 906) 00:18:09.371 3.413 - 3.429: 79.3050% ( 997) 00:18:09.371 3.429 - 3.444: 83.3902% ( 683) 00:18:09.371 3.444 - 3.459: 85.7288% ( 391) 00:18:09.371 3.459 - 3.474: 86.8892% ( 194) 00:18:09.371 3.474 - 3.490: 87.5650% ( 113) 00:18:09.371 3.490 - 3.505: 87.9060% ( 57) 00:18:09.371 3.505 - 3.520: 88.3905% ( 81) 00:18:09.371 3.520 - 3.535: 89.0962% ( 118) 00:18:09.371 3.535 - 3.550: 90.0592% ( 161) 00:18:09.371 3.550 - 3.566: 91.1598% ( 184) 00:18:09.371 3.566 - 3.581: 92.1706% ( 169) 00:18:09.371 3.581 - 3.596: 93.1156% ( 158) 00:18:09.371 3.596 - 3.611: 93.9171% ( 134) 00:18:09.371 3.611 - 3.627: 94.7904% ( 146) 00:18:09.371 3.627 - 3.642: 95.6098% ( 137) 00:18:09.371 3.642 - 3.657: 96.3515% ( 124) 00:18:09.371 3.657 - 3.672: 97.1948% ( 141) 00:18:09.371 3.672 - 3.688: 97.7511% ( 93) 00:18:09.371 3.688 - 3.703: 98.1937% ( 74) 00:18:09.371 3.703 - 3.718: 98.5585% ( 61) 00:18:09.371 3.718 - 3.733: 98.8636% ( 51) 00:18:09.371 3.733 - 3.749: 99.1088% ( 41) 00:18:09.371 3.749 - 3.764: 99.3002% ( 32) 00:18:09.371 3.764 - 3.779: 99.4198% ( 20) 00:18:09.371 3.779 - 3.794: 99.4856% ( 11) 00:18:09.371 3.794 - 3.810: 99.5394% ( 9) 00:18:09.371 3.810 - 3.825: 99.5574% ( 3) 00:18:09.371 3.840 - 3.855: 99.5753% ( 3) 00:18:09.371 3.855 - 3.870: 99.5813% ( 1) 00:18:09.371 3.870 - 3.886: 99.5873% ( 1) 00:18:09.371 3.901 - 3.931: 99.5933% ( 1) 00:18:09.371 3.962 - 3.992: 99.5993% ( 1) 00:18:09.371 5.090 - 5.120: 99.6052% ( 1) 00:18:09.371 5.150 - 5.181: 99.6172% ( 2) 00:18:09.371 5.211 - 5.242: 99.6292% ( 2) 00:18:09.371 5.242 - 5.272: 99.6351% ( 1) 00:18:09.372 5.364 - 5.394: 99.6411% ( 1) 00:18:09.372 5.394 - 5.425: 99.6471% ( 1) 00:18:09.372 5.425 - 5.455: 99.6591% ( 2) 00:18:09.372 
5.547 - 5.577: 99.6710% ( 2) 00:18:09.372 5.577 - 5.608: 99.6830% ( 2) 00:18:09.372 5.608 - 5.638: 99.6950% ( 2) 00:18:09.372 5.699 - 5.730: 99.7009% ( 1) 00:18:09.372 5.851 - 5.882: 99.7069% ( 1) 00:18:09.372 6.156 - 6.187: 99.7129% ( 1) 00:18:09.372 6.278 - 6.309: 99.7189% ( 1) 00:18:09.372 6.400 - 6.430: 99.7249% ( 1) 00:18:09.372 6.522 - 6.552: 99.7308% ( 1) 00:18:09.372 6.583 - 6.613: 99.7428% ( 2) 00:18:09.372 6.705 - 6.735: 99.7548% ( 2) 00:18:09.372 6.735 - 6.766: 99.7608% ( 1) 00:18:09.372 6.766 - 6.796: 99.7667% ( 1) 00:18:09.372 6.888 - 6.918: 99.7787% ( 2) 00:18:09.372 6.918 - 6.949: 99.7847% ( 1) 00:18:09.372 7.223 - 7.253: 99.7907% ( 1) 00:18:09.372 7.253 - 7.284: 99.7966% ( 1) 00:18:09.372 7.528 - 7.558: 99.8026% ( 1) 00:18:09.372 7.589 - 7.619: 99.8086% ( 1) 00:18:09.372 7.619 - 7.650: 99.8146% ( 1) 00:18:09.372 7.680 - 7.710: 99.8206% ( 1) 00:18:09.372 7.741 - 7.771: 99.8265% ( 1) 00:18:09.372 [2024-10-06 11:13:06.905152] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:09.372 8.046 - 8.107: 99.8325% ( 1) 00:18:09.372 8.107 - 8.168: 99.8445% ( 2) 00:18:09.372 8.168 - 8.229: 99.8505% ( 1) 00:18:09.372 9.021 - 9.082: 99.8624% ( 2) 00:18:09.372 9.204 - 9.265: 99.8684% ( 1) 00:18:09.372 9.265 - 9.326: 99.8744% ( 1) 00:18:09.372 3994.575 - 4025.783: 99.9940% ( 20) 00:18:09.372 4993.219 - 5024.427: 100.0000% ( 1) 00:18:09.372 00:18:09.372 Complete histogram 00:18:09.372 ================== 00:18:09.372 Range in us Cumulative Count 00:18:09.372 1.714 - 1.722: 0.0060% ( 1) 00:18:09.372 1.745 - 1.752: 0.0359% ( 5) 00:18:09.372 1.752 - 1.760: 0.1675% ( 22) 00:18:09.372 1.760 - 1.768: 0.4247% ( 43) 00:18:09.372 1.768 - 1.775: 0.8493% ( 71) 00:18:09.372 1.775 - 1.783: 1.5910% ( 124) 00:18:09.372 1.783 - 1.790: 2.4762% ( 148) 00:18:09.372 1.790 - 1.798: 3.1401% ( 111) 00:18:09.372 1.798 - 1.806: 5.6881% ( 426) 00:18:09.372 1.806 - 1.813: 19.0681% ( 2237) 00:18:09.372 1.813 - 1.821: 44.3687% ( 4230) 00:18:09.372 1.821 - 1.829: 68.8139% ( 4087) 00:18:09.372 1.829 - 1.836: 84.8197% ( 2676) 00:18:09.372 1.836 - 1.844: 92.5893% ( 1299) 00:18:09.372 1.844 - 1.851: 95.4124% ( 472) 00:18:09.372 1.851 - 1.859: 96.6386% ( 205) 00:18:09.372 1.859 - 1.867: 97.1350% ( 83) 00:18:09.372 1.867 - 1.874: 97.5537% ( 70) 00:18:09.372 1.874 - 1.882: 97.8767% ( 54) 00:18:09.372 1.882 - 1.890: 98.1817% ( 51) 00:18:09.372 1.890 - 1.897: 98.5047% ( 54) 00:18:09.372 1.897 - 1.905: 98.8516% ( 58) 00:18:09.372 1.905 - 1.912: 99.0669% ( 36) 00:18:09.372 1.912 - 1.920: 99.1686% ( 17) 00:18:09.372 1.920 - 1.928: 99.2284% ( 10) 00:18:09.372 1.928 - 1.935: 99.2583% ( 5) 00:18:09.372 1.935 - 1.943: 99.2882% ( 5) 00:18:09.372 1.943 - 1.950: 99.2942% ( 1) 00:18:09.372 1.950 - 1.966: 99.3181% ( 4) 00:18:09.372 1.981 - 1.996: 99.3241% ( 1) 00:18:09.372 1.996 - 2.011: 99.3301% ( 1) 00:18:09.372 2.011 - 2.027: 99.3361% ( 1) 00:18:09.372 2.210 - 2.225: 99.3421% ( 1) 00:18:09.372 3.657 - 3.672: 99.3480% ( 1) 00:18:09.372 3.672 - 3.688: 99.3540% ( 1) 00:18:09.372 3.733 - 3.749: 99.3600% ( 1) 00:18:09.372 3.749 - 3.764: 99.3660% ( 1) 00:18:09.372 3.825 - 3.840: 99.3720% ( 1) 00:18:09.372 3.840 - 3.855: 99.3780% ( 1) 00:18:09.372 3.855 - 3.870: 99.3839% ( 1) 00:18:09.372 3.931 - 3.962: 99.3899% ( 1) 00:18:09.372 3.962 - 3.992: 99.3959% ( 1) 00:18:09.372 3.992 - 4.023: 99.4019% ( 1) 00:18:09.372 4.175 - 4.206: 99.4079% ( 1) 00:18:09.372 4.389 - 4.419: 99.4138% ( 1) 00:18:09.372 4.480 - 4.510: 99.4258% ( 2) 00:18:09.372 4.571 - 4.602: 99.4318% ( 1) 00:18:09.372 4.754 - 
4.785: 99.4378% ( 1) 00:18:09.372 5.059 - 5.090: 99.4437% ( 1) 00:18:09.372 5.303 - 5.333: 99.4497% ( 1) 00:18:09.372 5.394 - 5.425: 99.4557% ( 1) 00:18:09.372 5.455 - 5.486: 99.4617% ( 1) 00:18:09.372 5.638 - 5.669: 99.4677% ( 1) 00:18:09.372 5.882 - 5.912: 99.4737% ( 1) 00:18:09.372 5.912 - 5.943: 99.4796% ( 1) 00:18:09.372 6.065 - 6.095: 99.4856% ( 1) 00:18:09.372 6.187 - 6.217: 99.4916% ( 1) 00:18:09.372 6.278 - 6.309: 99.4976% ( 1) 00:18:09.372 6.309 - 6.339: 99.5036% ( 1) 00:18:09.372 6.339 - 6.370: 99.5095% ( 1) 00:18:09.372 6.644 - 6.674: 99.5155% ( 1) 00:18:09.372 6.918 - 6.949: 99.5215% ( 1) 00:18:09.372 7.589 - 7.619: 99.5275% ( 1) 00:18:09.372 9.874 - 9.935: 99.5335% ( 1) 00:18:09.372 12.861 - 12.922: 99.5394% ( 1) 00:18:09.372 13.531 - 13.592: 99.5454% ( 1) 00:18:09.372 13.714 - 13.775: 99.5514% ( 1) 00:18:09.372 14.385 - 14.446: 99.5574% ( 1) 00:18:09.372 3994.575 - 4025.783: 100.0000% ( 74) 00:18:09.372 00:18:09.372 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:09.372 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:09.631 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:09.631 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:09.631 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:09.631 [ 00:18:09.631 { 00:18:09.631 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:09.631 "subtype": "Discovery", 00:18:09.631 "listen_addresses": [], 00:18:09.631 "allow_any_host": true, 00:18:09.631 "hosts": [] 00:18:09.631 }, 00:18:09.631 { 00:18:09.631 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:09.631 "subtype": "NVMe", 00:18:09.631 "listen_addresses": [ 00:18:09.631 { 00:18:09.631 "trtype": "VFIOUSER", 00:18:09.631 "adrfam": "IPv4", 00:18:09.631 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:09.631 "trsvcid": "0" 00:18:09.631 } 00:18:09.631 ], 00:18:09.631 "allow_any_host": true, 00:18:09.631 "hosts": [], 00:18:09.631 "serial_number": "SPDK1", 00:18:09.631 "model_number": "SPDK bdev Controller", 00:18:09.631 "max_namespaces": 32, 00:18:09.631 "min_cntlid": 1, 00:18:09.631 "max_cntlid": 65519, 00:18:09.631 "namespaces": [ 00:18:09.631 { 00:18:09.631 "nsid": 1, 00:18:09.631 "bdev_name": "Malloc1", 00:18:09.631 "name": "Malloc1", 00:18:09.631 "nguid": "7D6FC28C9F3E41028418370B192D1703", 00:18:09.631 "uuid": "7d6fc28c-9f3e-4102-8418-370b192d1703" 00:18:09.631 } 00:18:09.631 ] 00:18:09.631 }, 00:18:09.631 { 00:18:09.631 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:09.631 "subtype": "NVMe", 00:18:09.631 "listen_addresses": [ 00:18:09.631 { 00:18:09.631 "trtype": "VFIOUSER", 00:18:09.631 "adrfam": "IPv4", 00:18:09.631 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:09.631 "trsvcid": "0" 00:18:09.631 } 00:18:09.631 ], 00:18:09.631 "allow_any_host": true, 00:18:09.631 "hosts": [], 00:18:09.631 "serial_number": "SPDK2", 00:18:09.631 "model_number": "SPDK bdev Controller", 00:18:09.631 "max_namespaces": 32, 00:18:09.631 "min_cntlid": 1, 00:18:09.631 "max_cntlid": 65519, 00:18:09.631 "namespaces": [ 00:18:09.631 { 00:18:09.631 "nsid": 1, 00:18:09.631 "bdev_name": "Malloc2", 
00:18:09.631 "name": "Malloc2", 00:18:09.631 "nguid": "002B7FBE5D2549828126F71C50F69F8D", 00:18:09.631 "uuid": "002b7fbe-5d25-4982-8126-f71c50f69f8d" 00:18:09.631 } 00:18:09.631 ] 00:18:09.631 } 00:18:09.631 ] 00:18:09.631 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:09.631 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:09.631 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2038282 00:18:09.631 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:09.631 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:09.631 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:09.631 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:09.631 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:09.631 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:09.631 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:09.890 [2024-10-06 11:13:07.271578] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:09.890 Malloc3 00:18:09.890 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:10.148 [2024-10-06 11:13:07.523440] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:10.148 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:10.148 Asynchronous Event Request test 00:18:10.148 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:10.148 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:10.148 Registering asynchronous event callbacks... 00:18:10.148 Starting namespace attribute notice tests for all controllers... 00:18:10.148 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:10.148 aer_cb - Changed Namespace 00:18:10.148 Cleaning up... 
00:18:10.148 [ 00:18:10.148 { 00:18:10.148 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:10.148 "subtype": "Discovery", 00:18:10.148 "listen_addresses": [], 00:18:10.148 "allow_any_host": true, 00:18:10.148 "hosts": [] 00:18:10.148 }, 00:18:10.148 { 00:18:10.148 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:10.148 "subtype": "NVMe", 00:18:10.148 "listen_addresses": [ 00:18:10.148 { 00:18:10.148 "trtype": "VFIOUSER", 00:18:10.148 "adrfam": "IPv4", 00:18:10.148 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:10.148 "trsvcid": "0" 00:18:10.148 } 00:18:10.148 ], 00:18:10.148 "allow_any_host": true, 00:18:10.148 "hosts": [], 00:18:10.148 "serial_number": "SPDK1", 00:18:10.148 "model_number": "SPDK bdev Controller", 00:18:10.148 "max_namespaces": 32, 00:18:10.148 "min_cntlid": 1, 00:18:10.148 "max_cntlid": 65519, 00:18:10.148 "namespaces": [ 00:18:10.148 { 00:18:10.148 "nsid": 1, 00:18:10.148 "bdev_name": "Malloc1", 00:18:10.148 "name": "Malloc1", 00:18:10.148 "nguid": "7D6FC28C9F3E41028418370B192D1703", 00:18:10.149 "uuid": "7d6fc28c-9f3e-4102-8418-370b192d1703" 00:18:10.149 }, 00:18:10.149 { 00:18:10.149 "nsid": 2, 00:18:10.149 "bdev_name": "Malloc3", 00:18:10.149 "name": "Malloc3", 00:18:10.149 "nguid": "E1DECDBC757243A98FF181AA9555396F", 00:18:10.149 "uuid": "e1decdbc-7572-43a9-8ff1-81aa9555396f" 00:18:10.149 } 00:18:10.149 ] 00:18:10.149 }, 00:18:10.149 { 00:18:10.149 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:10.149 "subtype": "NVMe", 00:18:10.149 "listen_addresses": [ 00:18:10.149 { 00:18:10.149 "trtype": "VFIOUSER", 00:18:10.149 "adrfam": "IPv4", 00:18:10.149 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:10.149 "trsvcid": "0" 00:18:10.149 } 00:18:10.149 ], 00:18:10.149 "allow_any_host": true, 00:18:10.149 "hosts": [], 00:18:10.149 "serial_number": "SPDK2", 00:18:10.149 "model_number": "SPDK bdev Controller", 00:18:10.149 "max_namespaces": 32, 00:18:10.149 "min_cntlid": 1, 00:18:10.149 "max_cntlid": 65519, 00:18:10.149 "namespaces": [ 00:18:10.149 { 00:18:10.149 "nsid": 1, 00:18:10.149 "bdev_name": "Malloc2", 00:18:10.149 "name": "Malloc2", 00:18:10.149 "nguid": "002B7FBE5D2549828126F71C50F69F8D", 00:18:10.149 "uuid": "002b7fbe-5d25-4982-8126-f71c50f69f8d" 00:18:10.149 } 00:18:10.149 ] 00:18:10.149 } 00:18:10.149 ] 00:18:10.411 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2038282 00:18:10.411 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:10.411 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:10.411 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:10.411 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:10.411 [2024-10-06 11:13:07.763381] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:18:10.411 [2024-10-06 11:13:07.763429] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2038298 ] 00:18:10.411 [2024-10-06 11:13:07.791154] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:10.411 [2024-10-06 11:13:07.799284] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:10.411 [2024-10-06 11:13:07.799305] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0f28e48000 00:18:10.411 [2024-10-06 11:13:07.800286] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.411 [2024-10-06 11:13:07.801300] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.411 [2024-10-06 11:13:07.802303] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.411 [2024-10-06 11:13:07.803310] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:10.411 [2024-10-06 11:13:07.804312] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:10.411 [2024-10-06 11:13:07.805322] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.411 [2024-10-06 11:13:07.806324] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:10.411 [2024-10-06 11:13:07.807334] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.411 [2024-10-06 11:13:07.808344] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:10.411 [2024-10-06 11:13:07.808354] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0f27b51000 00:18:10.411 [2024-10-06 11:13:07.809265] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:10.411 [2024-10-06 11:13:07.823261] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:10.411 [2024-10-06 11:13:07.823296] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:18:10.411 [2024-10-06 11:13:07.825354] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:10.411 [2024-10-06 11:13:07.825389] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:10.411 [2024-10-06 11:13:07.825457] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:18:10.411 [2024-10-06 
11:13:07.825472] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:18:10.411 [2024-10-06 11:13:07.825477] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:18:10.411 [2024-10-06 11:13:07.826355] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:10.411 [2024-10-06 11:13:07.826364] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:18:10.411 [2024-10-06 11:13:07.826371] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:18:10.411 [2024-10-06 11:13:07.827355] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:10.411 [2024-10-06 11:13:07.827365] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:18:10.411 [2024-10-06 11:13:07.827372] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:18:10.411 [2024-10-06 11:13:07.828364] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:10.411 [2024-10-06 11:13:07.828373] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:10.411 [2024-10-06 11:13:07.829373] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:10.411 [2024-10-06 11:13:07.829382] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:18:10.411 [2024-10-06 11:13:07.829387] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:18:10.411 [2024-10-06 11:13:07.829392] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:10.411 [2024-10-06 11:13:07.829497] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:18:10.411 [2024-10-06 11:13:07.829502] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:10.411 [2024-10-06 11:13:07.829507] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:10.411 [2024-10-06 11:13:07.830381] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:10.411 [2024-10-06 11:13:07.831388] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:10.411 [2024-10-06 11:13:07.832401] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:18:10.411 [2024-10-06 11:13:07.833404] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:10.411 [2024-10-06 11:13:07.833444] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:10.411 [2024-10-06 11:13:07.834421] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:10.411 [2024-10-06 11:13:07.834430] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:10.411 [2024-10-06 11:13:07.834434] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:18:10.411 [2024-10-06 11:13:07.834451] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:18:10.411 [2024-10-06 11:13:07.834458] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.834468] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:10.412 [2024-10-06 11:13:07.834472] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:10.412 [2024-10-06 11:13:07.834476] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.412 [2024-10-06 11:13:07.834487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:10.412 [2024-10-06 11:13:07.841068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:10.412 [2024-10-06 11:13:07.841079] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:18:10.412 [2024-10-06 11:13:07.841084] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:18:10.412 [2024-10-06 11:13:07.841087] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:18:10.412 [2024-10-06 11:13:07.841092] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:10.412 [2024-10-06 11:13:07.841096] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:18:10.412 [2024-10-06 11:13:07.841100] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:18:10.412 [2024-10-06 11:13:07.841104] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.841111] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.841121] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:10.412 [2024-10-06 11:13:07.849066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:10.412 [2024-10-06 11:13:07.849078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.412 [2024-10-06 11:13:07.849086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.412 [2024-10-06 11:13:07.849093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.412 [2024-10-06 11:13:07.849100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.412 [2024-10-06 11:13:07.849105] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.849115] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.849124] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:10.412 [2024-10-06 11:13:07.857065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:10.412 [2024-10-06 11:13:07.857073] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:18:10.412 [2024-10-06 11:13:07.857078] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.857084] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.857092] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.857100] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:10.412 [2024-10-06 11:13:07.865065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:10.412 [2024-10-06 11:13:07.865118] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.865126] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.865133] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:10.412 [2024-10-06 11:13:07.865137] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:10.412 [2024-10-06 11:13:07.865140] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:18:10.412 [2024-10-06 11:13:07.865146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:10.412 [2024-10-06 11:13:07.873076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:10.412 [2024-10-06 11:13:07.873087] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:18:10.412 [2024-10-06 11:13:07.873099] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.873106] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.873112] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:10.412 [2024-10-06 11:13:07.873116] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:10.412 [2024-10-06 11:13:07.873119] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.412 [2024-10-06 11:13:07.873125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:10.412 [2024-10-06 11:13:07.881067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:10.412 [2024-10-06 11:13:07.881081] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.881090] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.881097] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:10.412 [2024-10-06 11:13:07.881101] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:10.412 [2024-10-06 11:13:07.881104] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.412 [2024-10-06 11:13:07.881110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:10.412 [2024-10-06 11:13:07.889064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:10.412 [2024-10-06 11:13:07.889074] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.889080] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.889089] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.889095] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.889099] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.889104] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.889108] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:18:10.412 [2024-10-06 11:13:07.889112] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:18:10.412 [2024-10-06 11:13:07.889117] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:18:10.412 [2024-10-06 11:13:07.889133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:10.412 [2024-10-06 11:13:07.897066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:10.412 [2024-10-06 11:13:07.897079] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:10.412 [2024-10-06 11:13:07.905067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:10.412 [2024-10-06 11:13:07.905079] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:10.412 [2024-10-06 11:13:07.913069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:10.412 [2024-10-06 11:13:07.913083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:10.412 [2024-10-06 11:13:07.921065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:10.412 [2024-10-06 11:13:07.921082] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:10.412 [2024-10-06 11:13:07.921086] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:10.412 [2024-10-06 11:13:07.921090] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:10.412 [2024-10-06 11:13:07.921095] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:10.412 [2024-10-06 11:13:07.921098] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:10.412 [2024-10-06 11:13:07.921104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:10.412 [2024-10-06 11:13:07.921110] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:10.412 [2024-10-06 11:13:07.921114] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:10.412 [2024-10-06 11:13:07.921117] 
nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.412 [2024-10-06 11:13:07.921123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:10.412 [2024-10-06 11:13:07.921129] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:10.413 [2024-10-06 11:13:07.921132] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:10.413 [2024-10-06 11:13:07.921135] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.413 [2024-10-06 11:13:07.921141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:10.413 [2024-10-06 11:13:07.921147] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:10.413 [2024-10-06 11:13:07.921151] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:10.413 [2024-10-06 11:13:07.921154] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.413 [2024-10-06 11:13:07.921159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:10.413 [2024-10-06 11:13:07.929067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:10.413 [2024-10-06 11:13:07.929082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:10.413 [2024-10-06 11:13:07.929091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:10.413 [2024-10-06 11:13:07.929097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:10.413 ===================================================== 00:18:10.413 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:10.413 ===================================================== 00:18:10.413 Controller Capabilities/Features 00:18:10.413 ================================ 00:18:10.413 Vendor ID: 4e58 00:18:10.413 Subsystem Vendor ID: 4e58 00:18:10.413 Serial Number: SPDK2 00:18:10.413 Model Number: SPDK bdev Controller 00:18:10.413 Firmware Version: 25.01 00:18:10.413 Recommended Arb Burst: 6 00:18:10.413 IEEE OUI Identifier: 8d 6b 50 00:18:10.413 Multi-path I/O 00:18:10.413 May have multiple subsystem ports: Yes 00:18:10.413 May have multiple controllers: Yes 00:18:10.413 Associated with SR-IOV VF: No 00:18:10.413 Max Data Transfer Size: 131072 00:18:10.413 Max Number of Namespaces: 32 00:18:10.413 Max Number of I/O Queues: 127 00:18:10.413 NVMe Specification Version (VS): 1.3 00:18:10.413 NVMe Specification Version (Identify): 1.3 00:18:10.413 Maximum Queue Entries: 256 00:18:10.413 Contiguous Queues Required: Yes 00:18:10.413 Arbitration Mechanisms Supported 00:18:10.413 Weighted Round Robin: Not Supported 00:18:10.413 Vendor Specific: Not Supported 00:18:10.413 Reset Timeout: 15000 ms 00:18:10.413 Doorbell Stride: 4 bytes 00:18:10.413 NVM Subsystem Reset: Not Supported 00:18:10.413 Command 
Sets Supported 00:18:10.413 NVM Command Set: Supported 00:18:10.413 Boot Partition: Not Supported 00:18:10.413 Memory Page Size Minimum: 4096 bytes 00:18:10.413 Memory Page Size Maximum: 4096 bytes 00:18:10.413 Persistent Memory Region: Not Supported 00:18:10.413 Optional Asynchronous Events Supported 00:18:10.413 Namespace Attribute Notices: Supported 00:18:10.413 Firmware Activation Notices: Not Supported 00:18:10.413 ANA Change Notices: Not Supported 00:18:10.413 PLE Aggregate Log Change Notices: Not Supported 00:18:10.413 LBA Status Info Alert Notices: Not Supported 00:18:10.413 EGE Aggregate Log Change Notices: Not Supported 00:18:10.413 Normal NVM Subsystem Shutdown event: Not Supported 00:18:10.413 Zone Descriptor Change Notices: Not Supported 00:18:10.413 Discovery Log Change Notices: Not Supported 00:18:10.413 Controller Attributes 00:18:10.413 128-bit Host Identifier: Supported 00:18:10.413 Non-Operational Permissive Mode: Not Supported 00:18:10.413 NVM Sets: Not Supported 00:18:10.413 Read Recovery Levels: Not Supported 00:18:10.413 Endurance Groups: Not Supported 00:18:10.413 Predictable Latency Mode: Not Supported 00:18:10.413 Traffic Based Keep ALive: Not Supported 00:18:10.413 Namespace Granularity: Not Supported 00:18:10.413 SQ Associations: Not Supported 00:18:10.413 UUID List: Not Supported 00:18:10.413 Multi-Domain Subsystem: Not Supported 00:18:10.413 Fixed Capacity Management: Not Supported 00:18:10.413 Variable Capacity Management: Not Supported 00:18:10.413 Delete Endurance Group: Not Supported 00:18:10.413 Delete NVM Set: Not Supported 00:18:10.413 Extended LBA Formats Supported: Not Supported 00:18:10.413 Flexible Data Placement Supported: Not Supported 00:18:10.413 00:18:10.413 Controller Memory Buffer Support 00:18:10.413 ================================ 00:18:10.413 Supported: No 00:18:10.413 00:18:10.413 Persistent Memory Region Support 00:18:10.413 ================================ 00:18:10.413 Supported: No 00:18:10.413 00:18:10.413 Admin Command Set Attributes 00:18:10.413 ============================ 00:18:10.413 Security Send/Receive: Not Supported 00:18:10.413 Format NVM: Not Supported 00:18:10.413 Firmware Activate/Download: Not Supported 00:18:10.413 Namespace Management: Not Supported 00:18:10.413 Device Self-Test: Not Supported 00:18:10.413 Directives: Not Supported 00:18:10.413 NVMe-MI: Not Supported 00:18:10.413 Virtualization Management: Not Supported 00:18:10.413 Doorbell Buffer Config: Not Supported 00:18:10.413 Get LBA Status Capability: Not Supported 00:18:10.413 Command & Feature Lockdown Capability: Not Supported 00:18:10.413 Abort Command Limit: 4 00:18:10.413 Async Event Request Limit: 4 00:18:10.413 Number of Firmware Slots: N/A 00:18:10.413 Firmware Slot 1 Read-Only: N/A 00:18:10.413 Firmware Activation Without Reset: N/A 00:18:10.413 Multiple Update Detection Support: N/A 00:18:10.413 Firmware Update Granularity: No Information Provided 00:18:10.413 Per-Namespace SMART Log: No 00:18:10.413 Asymmetric Namespace Access Log Page: Not Supported 00:18:10.413 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:10.413 Command Effects Log Page: Supported 00:18:10.413 Get Log Page Extended Data: Supported 00:18:10.413 Telemetry Log Pages: Not Supported 00:18:10.413 Persistent Event Log Pages: Not Supported 00:18:10.413 Supported Log Pages Log Page: May Support 00:18:10.413 Commands Supported & Effects Log Page: Not Supported 00:18:10.413 Feature Identifiers & Effects Log Page:May Support 00:18:10.413 NVMe-MI Commands & Effects Log Page: May Support 
00:18:10.413 Data Area 4 for Telemetry Log: Not Supported 00:18:10.413 Error Log Page Entries Supported: 128 00:18:10.413 Keep Alive: Supported 00:18:10.413 Keep Alive Granularity: 10000 ms 00:18:10.413 00:18:10.413 NVM Command Set Attributes 00:18:10.413 ========================== 00:18:10.413 Submission Queue Entry Size 00:18:10.413 Max: 64 00:18:10.413 Min: 64 00:18:10.413 Completion Queue Entry Size 00:18:10.413 Max: 16 00:18:10.413 Min: 16 00:18:10.413 Number of Namespaces: 32 00:18:10.413 Compare Command: Supported 00:18:10.413 Write Uncorrectable Command: Not Supported 00:18:10.413 Dataset Management Command: Supported 00:18:10.413 Write Zeroes Command: Supported 00:18:10.413 Set Features Save Field: Not Supported 00:18:10.413 Reservations: Not Supported 00:18:10.413 Timestamp: Not Supported 00:18:10.413 Copy: Supported 00:18:10.413 Volatile Write Cache: Present 00:18:10.413 Atomic Write Unit (Normal): 1 00:18:10.413 Atomic Write Unit (PFail): 1 00:18:10.413 Atomic Compare & Write Unit: 1 00:18:10.413 Fused Compare & Write: Supported 00:18:10.413 Scatter-Gather List 00:18:10.413 SGL Command Set: Supported (Dword aligned) 00:18:10.413 SGL Keyed: Not Supported 00:18:10.413 SGL Bit Bucket Descriptor: Not Supported 00:18:10.413 SGL Metadata Pointer: Not Supported 00:18:10.413 Oversized SGL: Not Supported 00:18:10.413 SGL Metadata Address: Not Supported 00:18:10.413 SGL Offset: Not Supported 00:18:10.413 Transport SGL Data Block: Not Supported 00:18:10.413 Replay Protected Memory Block: Not Supported 00:18:10.413 00:18:10.413 Firmware Slot Information 00:18:10.413 ========================= 00:18:10.413 Active slot: 1 00:18:10.413 Slot 1 Firmware Revision: 25.01 00:18:10.413 00:18:10.413 00:18:10.413 Commands Supported and Effects 00:18:10.413 ============================== 00:18:10.413 Admin Commands 00:18:10.413 -------------- 00:18:10.413 Get Log Page (02h): Supported 00:18:10.413 Identify (06h): Supported 00:18:10.413 Abort (08h): Supported 00:18:10.413 Set Features (09h): Supported 00:18:10.413 Get Features (0Ah): Supported 00:18:10.413 Asynchronous Event Request (0Ch): Supported 00:18:10.413 Keep Alive (18h): Supported 00:18:10.414 I/O Commands 00:18:10.414 ------------ 00:18:10.414 Flush (00h): Supported LBA-Change 00:18:10.414 Write (01h): Supported LBA-Change 00:18:10.414 Read (02h): Supported 00:18:10.414 Compare (05h): Supported 00:18:10.414 Write Zeroes (08h): Supported LBA-Change 00:18:10.414 Dataset Management (09h): Supported LBA-Change 00:18:10.414 Copy (19h): Supported LBA-Change 00:18:10.414 00:18:10.414 Error Log 00:18:10.414 ========= 00:18:10.414 00:18:10.414 Arbitration 00:18:10.414 =========== 00:18:10.414 Arbitration Burst: 1 00:18:10.414 00:18:10.414 Power Management 00:18:10.414 ================ 00:18:10.414 Number of Power States: 1 00:18:10.414 Current Power State: Power State #0 00:18:10.414 Power State #0: 00:18:10.414 Max Power: 0.00 W 00:18:10.414 Non-Operational State: Operational 00:18:10.414 Entry Latency: Not Reported 00:18:10.414 Exit Latency: Not Reported 00:18:10.414 Relative Read Throughput: 0 00:18:10.414 Relative Read Latency: 0 00:18:10.414 Relative Write Throughput: 0 00:18:10.414 Relative Write Latency: 0 00:18:10.414 Idle Power: Not Reported 00:18:10.414 Active Power: Not Reported 00:18:10.414 Non-Operational Permissive Mode: Not Supported 00:18:10.414 00:18:10.414 Health Information 00:18:10.414 ================== 00:18:10.414 Critical Warnings: 00:18:10.414 Available Spare Space: OK 00:18:10.414 Temperature: OK 00:18:10.414 Device 
Reliability: OK 00:18:10.414 Read Only: No 00:18:10.414 Volatile Memory Backup: OK 00:18:10.414 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:10.414 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:10.414 Available Spare: 0% 00:18:10.414 Available Sp[2024-10-06 11:13:07.929178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:10.414 [2024-10-06 11:13:07.937065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:10.414 [2024-10-06 11:13:07.937094] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:18:10.414 [2024-10-06 11:13:07.937102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.414 [2024-10-06 11:13:07.937108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.414 [2024-10-06 11:13:07.937113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.414 [2024-10-06 11:13:07.937119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.414 [2024-10-06 11:13:07.937161] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:10.414 [2024-10-06 11:13:07.937171] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:10.414 [2024-10-06 11:13:07.938161] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:10.414 [2024-10-06 11:13:07.938205] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:18:10.414 [2024-10-06 11:13:07.938211] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:18:10.414 [2024-10-06 11:13:07.939170] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:10.414 [2024-10-06 11:13:07.939181] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:18:10.414 [2024-10-06 11:13:07.939229] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:10.414 [2024-10-06 11:13:07.942067] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:10.414 are Threshold: 0% 00:18:10.414 Life Percentage Used: 0% 00:18:10.414 Data Units Read: 0 00:18:10.414 Data Units Written: 0 00:18:10.414 Host Read Commands: 0 00:18:10.414 Host Write Commands: 0 00:18:10.414 Controller Busy Time: 0 minutes 00:18:10.414 Power Cycles: 0 00:18:10.414 Power On Hours: 0 hours 00:18:10.414 Unsafe Shutdowns: 0 00:18:10.414 Unrecoverable Media Errors: 0 00:18:10.414 Lifetime Error Log Entries: 0 00:18:10.414 Warning Temperature Time: 0 minutes 00:18:10.414 Critical Temperature Time: 0 minutes 00:18:10.414 00:18:10.414 Number of Queues 00:18:10.414 ================ 00:18:10.414 Number of 
I/O Submission Queues: 127 00:18:10.414 Number of I/O Completion Queues: 127 00:18:10.414 00:18:10.414 Active Namespaces 00:18:10.414 ================= 00:18:10.414 Namespace ID:1 00:18:10.414 Error Recovery Timeout: Unlimited 00:18:10.414 Command Set Identifier: NVM (00h) 00:18:10.414 Deallocate: Supported 00:18:10.414 Deallocated/Unwritten Error: Not Supported 00:18:10.414 Deallocated Read Value: Unknown 00:18:10.414 Deallocate in Write Zeroes: Not Supported 00:18:10.414 Deallocated Guard Field: 0xFFFF 00:18:10.414 Flush: Supported 00:18:10.414 Reservation: Supported 00:18:10.414 Namespace Sharing Capabilities: Multiple Controllers 00:18:10.414 Size (in LBAs): 131072 (0GiB) 00:18:10.414 Capacity (in LBAs): 131072 (0GiB) 00:18:10.414 Utilization (in LBAs): 131072 (0GiB) 00:18:10.414 NGUID: 002B7FBE5D2549828126F71C50F69F8D 00:18:10.414 UUID: 002b7fbe-5d25-4982-8126-f71c50f69f8d 00:18:10.414 Thin Provisioning: Not Supported 00:18:10.414 Per-NS Atomic Units: Yes 00:18:10.414 Atomic Boundary Size (Normal): 0 00:18:10.414 Atomic Boundary Size (PFail): 0 00:18:10.414 Atomic Boundary Offset: 0 00:18:10.414 Maximum Single Source Range Length: 65535 00:18:10.414 Maximum Copy Length: 65535 00:18:10.414 Maximum Source Range Count: 1 00:18:10.414 NGUID/EUI64 Never Reused: No 00:18:10.414 Namespace Write Protected: No 00:18:10.414 Number of LBA Formats: 1 00:18:10.414 Current LBA Format: LBA Format #00 00:18:10.414 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:10.414 00:18:10.414 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:10.673 [2024-10-06 11:13:08.149171] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:15.978 Initializing NVMe Controllers 00:18:15.978 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:15.978 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:15.978 Initialization complete. Launching workers. 
00:18:15.978 ======================================================== 00:18:15.978 Latency(us) 00:18:15.978 Device Information : IOPS MiB/s Average min max 00:18:15.978 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39944.78 156.03 3204.03 945.59 7630.76 00:18:15.978 ======================================================== 00:18:15.978 Total : 39944.78 156.03 3204.03 945.59 7630.76 00:18:15.978 00:18:15.978 [2024-10-06 11:13:13.256314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:15.978 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:15.978 [2024-10-06 11:13:13.474959] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:21.251 Initializing NVMe Controllers 00:18:21.251 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:21.251 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:21.251 Initialization complete. Launching workers. 00:18:21.251 ======================================================== 00:18:21.251 Latency(us) 00:18:21.251 Device Information : IOPS MiB/s Average min max 00:18:21.251 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39918.63 155.93 3206.35 952.30 6669.73 00:18:21.251 ======================================================== 00:18:21.251 Total : 39918.63 155.93 3206.35 952.30 6669.73 00:18:21.251 00:18:21.251 [2024-10-06 11:13:18.496209] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:21.251 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:21.251 [2024-10-06 11:13:18.682296] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:26.643 [2024-10-06 11:13:23.815154] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:26.643 Initializing NVMe Controllers 00:18:26.643 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:26.643 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:26.643 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:26.643 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:26.643 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:26.643 Initialization complete. Launching workers. 
00:18:26.643 Starting thread on core 2 00:18:26.643 Starting thread on core 3 00:18:26.643 Starting thread on core 1 00:18:26.643 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:26.643 [2024-10-06 11:13:24.096435] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:29.937 [2024-10-06 11:13:27.147425] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:29.937 Initializing NVMe Controllers 00:18:29.937 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:29.937 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:29.937 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:29.937 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:29.937 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:29.937 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:29.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:29.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:29.937 Initialization complete. Launching workers. 00:18:29.937 Starting thread on core 1 with urgent priority queue 00:18:29.937 Starting thread on core 2 with urgent priority queue 00:18:29.937 Starting thread on core 3 with urgent priority queue 00:18:29.937 Starting thread on core 0 with urgent priority queue 00:18:29.937 SPDK bdev Controller (SPDK2 ) core 0: 8897.33 IO/s 11.24 secs/100000 ios 00:18:29.937 SPDK bdev Controller (SPDK2 ) core 1: 7775.33 IO/s 12.86 secs/100000 ios 00:18:29.937 SPDK bdev Controller (SPDK2 ) core 2: 9440.67 IO/s 10.59 secs/100000 ios 00:18:29.937 SPDK bdev Controller (SPDK2 ) core 3: 7225.33 IO/s 13.84 secs/100000 ios 00:18:29.937 ======================================================== 00:18:29.937 00:18:29.937 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:29.937 [2024-10-06 11:13:27.419461] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:29.937 Initializing NVMe Controllers 00:18:29.937 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:29.937 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:29.937 Namespace ID: 1 size: 0GB 00:18:29.937 Initialization complete. 00:18:29.937 INFO: using host memory buffer for IO 00:18:29.937 Hello world! 
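A small aside on the arbitration summary a few lines up: each SPDK bdev Controller line reports the same run twice, once as raw IO/s and once as the time needed to finish the fixed 100000-I/O workload, so the second figure is simply 100000 divided by the first. A quick check of the core 0 line (8897.33 IO/s, 11.24 secs/100000 ios), done with plain awk since nothing SPDK-specific is involved:

    # Sanity check of the reported "secs/100000 ios" column for core 0
    awk 'BEGIN { printf "%.2f secs per 100000 ios\n", 100000 / 8897.33 }'   # prints 11.24

The hello_world run whose output ends just above is a pure connectivity check: it attaches to the same vfio-user controller, reports namespace 1, uses a host memory buffer for its I/O, and prints Hello world! before the controller is disabled again on the next line.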
00:18:29.937 [2024-10-06 11:13:27.430526] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:29.937 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:30.197 [2024-10-06 11:13:27.690372] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:31.579 Initializing NVMe Controllers 00:18:31.579 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:31.579 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:31.579 Initialization complete. Launching workers. 00:18:31.579 submit (in ns) avg, min, max = 6465.7, 3174.3, 3998757.1 00:18:31.579 complete (in ns) avg, min, max = 22007.2, 1748.6, 3999598.1 00:18:31.579 00:18:31.579 Submit histogram 00:18:31.579 ================ 00:18:31.579 Range in us Cumulative Count 00:18:31.579 3.170 - 3.185: 0.0357% ( 6) 00:18:31.579 3.185 - 3.200: 0.2318% ( 33) 00:18:31.579 3.200 - 3.215: 1.1292% ( 151) 00:18:31.579 3.215 - 3.230: 4.1008% ( 500) 00:18:31.579 3.230 - 3.246: 9.7052% ( 943) 00:18:31.579 3.246 - 3.261: 15.5414% ( 982) 00:18:31.579 3.261 - 3.276: 23.0596% ( 1265) 00:18:31.579 3.276 - 3.291: 30.7857% ( 1300) 00:18:31.579 3.291 - 3.307: 37.2935% ( 1095) 00:18:31.579 3.307 - 3.322: 42.1431% ( 816) 00:18:31.579 3.322 - 3.337: 47.0641% ( 828) 00:18:31.579 3.337 - 3.352: 51.5393% ( 753) 00:18:31.579 3.352 - 3.368: 55.2003% ( 616) 00:18:31.579 3.368 - 3.383: 61.1375% ( 999) 00:18:31.579 3.383 - 3.398: 67.9900% ( 1153) 00:18:31.579 3.398 - 3.413: 72.7624% ( 803) 00:18:31.579 3.413 - 3.429: 78.3549% ( 941) 00:18:31.579 3.429 - 3.444: 82.4438% ( 688) 00:18:31.579 3.444 - 3.459: 85.2252% ( 468) 00:18:31.579 3.459 - 3.474: 86.5922% ( 230) 00:18:31.579 3.474 - 3.490: 87.1924% ( 101) 00:18:31.579 3.490 - 3.505: 87.5312% ( 57) 00:18:31.579 3.505 - 3.520: 88.0067% ( 80) 00:18:31.579 3.520 - 3.535: 88.7317% ( 122) 00:18:31.579 3.535 - 3.550: 89.6589% ( 156) 00:18:31.579 3.550 - 3.566: 90.7286% ( 180) 00:18:31.579 3.566 - 3.581: 91.6914% ( 162) 00:18:31.579 3.581 - 3.596: 92.6423% ( 160) 00:18:31.579 3.596 - 3.611: 93.4625% ( 138) 00:18:31.579 3.611 - 3.627: 94.1816% ( 121) 00:18:31.579 3.627 - 3.642: 95.1206% ( 158) 00:18:31.579 3.642 - 3.657: 96.1013% ( 165) 00:18:31.579 3.657 - 3.672: 97.0165% ( 154) 00:18:31.579 3.672 - 3.688: 97.6703% ( 110) 00:18:31.579 3.688 - 3.703: 98.1398% ( 79) 00:18:31.579 3.703 - 3.718: 98.5499% ( 69) 00:18:31.579 3.718 - 3.733: 98.8589% ( 52) 00:18:31.579 3.733 - 3.749: 99.1085% ( 42) 00:18:31.579 3.749 - 3.764: 99.2928% ( 31) 00:18:31.579 3.764 - 3.779: 99.4413% ( 25) 00:18:31.579 3.779 - 3.794: 99.5067% ( 11) 00:18:31.579 3.794 - 3.810: 99.5364% ( 5) 00:18:31.579 3.810 - 3.825: 99.5661% ( 5) 00:18:31.579 3.825 - 3.840: 99.5721% ( 1) 00:18:31.579 3.840 - 3.855: 99.5780% ( 1) 00:18:31.579 3.855 - 3.870: 99.5840% ( 1) 00:18:31.579 3.886 - 3.901: 99.5899% ( 1) 00:18:31.579 5.272 - 5.303: 99.5959% ( 1) 00:18:31.579 5.333 - 5.364: 99.6018% ( 1) 00:18:31.579 5.394 - 5.425: 99.6077% ( 1) 00:18:31.579 5.455 - 5.486: 99.6137% ( 1) 00:18:31.579 5.486 - 5.516: 99.6196% ( 1) 00:18:31.579 5.577 - 5.608: 99.6315% ( 2) 00:18:31.579 5.699 - 5.730: 99.6375% ( 1) 00:18:31.579 5.882 - 5.912: 99.6434% ( 1) 00:18:31.579 6.004 - 6.034: 99.6494% ( 1) 00:18:31.579 6.065 - 6.095: 99.6612% ( 2) 
00:18:31.579 6.217 - 6.248: 99.6672% ( 1) 00:18:31.579 6.278 - 6.309: 99.6791% ( 2) 00:18:31.579 6.309 - 6.339: 99.6850% ( 1) 00:18:31.579 6.339 - 6.370: 99.6910% ( 1) 00:18:31.579 6.370 - 6.400: 99.6969% ( 1) 00:18:31.579 6.461 - 6.491: 99.7028% ( 1) 00:18:31.579 6.552 - 6.583: 99.7088% ( 1) 00:18:31.579 6.613 - 6.644: 99.7147% ( 1) 00:18:31.579 6.644 - 6.674: 99.7326% ( 3) 00:18:31.579 6.674 - 6.705: 99.7444% ( 2) 00:18:31.579 6.735 - 6.766: 99.7504% ( 1) 00:18:31.579 6.857 - 6.888: 99.7623% ( 2) 00:18:31.579 6.949 - 6.979: 99.7801% ( 3) 00:18:31.579 6.979 - 7.010: 99.7920% ( 2) 00:18:31.579 7.010 - 7.040: 99.8039% ( 2) 00:18:31.579 7.040 - 7.070: 99.8098% ( 1) 00:18:31.579 7.192 - 7.223: 99.8158% ( 1) 00:18:31.579 7.223 - 7.253: 99.8217% ( 1) 00:18:31.579 7.253 - 7.284: 99.8276% ( 1) 00:18:31.579 7.406 - 7.436: 99.8336% ( 1) 00:18:31.579 7.467 - 7.497: 99.8455% ( 2) 00:18:31.579 7.558 - 7.589: 99.8514% ( 1) 00:18:31.579 [2024-10-06 11:13:28.784050] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:31.579 7.680 - 7.710: 99.8574% ( 1) 00:18:31.579 7.710 - 7.741: 99.8633% ( 1) 00:18:31.579 7.802 - 7.863: 99.8692% ( 1) 00:18:31.579 7.863 - 7.924: 99.8752% ( 1) 00:18:31.579 7.924 - 7.985: 99.8811% ( 1) 00:18:31.579 7.985 - 8.046: 99.8871% ( 1) 00:18:31.579 8.229 - 8.290: 99.8930% ( 1) 00:18:31.579 8.716 - 8.777: 99.8990% ( 1) 00:18:31.579 8.899 - 8.960: 99.9109% ( 2) 00:18:31.579 9.509 - 9.570: 99.9168% ( 1) 00:18:31.579 13.349 - 13.410: 99.9227% ( 1) 00:18:31.579 3994.575 - 4025.783: 100.0000% ( 13) 00:18:31.579 00:18:31.579 Complete histogram 00:18:31.579 ================== 00:18:31.579 Range in us Cumulative Count 00:18:31.579 1.745 - 1.752: 0.0119% ( 2) 00:18:31.579 1.752 - 1.760: 0.5884% ( 97) 00:18:31.579 1.760 - 1.768: 3.0132% ( 408) 00:18:31.579 1.768 - 1.775: 7.4052% ( 739) 00:18:31.579 1.775 - 1.783: 10.6205% ( 541) 00:18:31.579 1.783 - 1.790: 12.2786% ( 279) 00:18:31.579 1.790 - 1.798: 13.6574% ( 232) 00:18:31.579 1.798 - 1.806: 14.5846% ( 156) 00:18:31.579 1.806 - 1.813: 16.4448% ( 313) 00:18:31.579 1.813 - 1.821: 28.0578% ( 1954) 00:18:31.579 1.821 - 1.829: 55.2122% ( 4569) 00:18:31.579 1.829 - 1.836: 80.9580% ( 4332) 00:18:31.579 1.836 - 1.844: 91.7211% ( 1811) 00:18:31.579 1.844 - 1.851: 94.9186% ( 538) 00:18:31.579 1.851 - 1.859: 96.4697% ( 261) 00:18:31.579 1.859 - 1.867: 97.4444% ( 164) 00:18:31.579 1.867 - 1.874: 97.8664% ( 71) 00:18:31.579 1.874 - 1.882: 98.0506% ( 31) 00:18:31.579 1.882 - 1.890: 98.2765% ( 38) 00:18:31.579 1.890 - 1.897: 98.5677% ( 49) 00:18:31.579 1.897 - 1.905: 98.8351% ( 45) 00:18:31.579 1.905 - 1.912: 99.0313% ( 33) 00:18:31.579 1.912 - 1.920: 99.2333% ( 34) 00:18:31.579 1.920 - 1.928: 99.2809% ( 8) 00:18:31.579 1.928 - 1.935: 99.2987% ( 3) 00:18:31.579 1.935 - 1.943: 99.3106% ( 2) 00:18:31.579 1.943 - 1.950: 99.3165% ( 1) 00:18:31.579 1.950 - 1.966: 99.3284% ( 2) 00:18:31.579 1.966 - 1.981: 99.3403% ( 2) 00:18:31.579 1.981 - 1.996: 99.3522% ( 2) 00:18:31.579 1.996 - 2.011: 99.3581% ( 1) 00:18:31.579 2.149 - 2.164: 99.3641% ( 1) 00:18:31.579 2.225 - 2.240: 99.3700% ( 1) 00:18:31.579 3.611 - 3.627: 99.3760% ( 1) 00:18:31.579 3.962 - 3.992: 99.3819% ( 1) 00:18:31.579 4.114 - 4.145: 99.3879% ( 1) 00:18:31.579 4.510 - 4.541: 99.3938% ( 1) 00:18:31.579 4.724 - 4.754: 99.4057% ( 2) 00:18:31.579 4.754 - 4.785: 99.4116% ( 1) 00:18:31.579 5.090 - 5.120: 99.4176% ( 1) 00:18:31.579 5.150 - 5.181: 99.4235% ( 1) 00:18:31.579 5.242 - 5.272: 99.4295% ( 1) 00:18:31.579 5.333 - 5.364: 99.4354% ( 1) 
00:18:31.579 5.364 - 5.394: 99.4413% ( 1) 00:18:31.579 5.577 - 5.608: 99.4473% ( 1) 00:18:31.579 5.760 - 5.790: 99.4592% ( 2) 00:18:31.579 5.851 - 5.882: 99.4651% ( 1) 00:18:31.579 6.034 - 6.065: 99.4711% ( 1) 00:18:31.579 6.065 - 6.095: 99.4770% ( 1) 00:18:31.579 6.339 - 6.370: 99.4829% ( 1) 00:18:31.579 6.583 - 6.613: 99.4889% ( 1) 00:18:31.579 6.796 - 6.827: 99.4948% ( 1) 00:18:31.579 3994.575 - 4025.783: 100.0000% ( 85) 00:18:31.579 00:18:31.580 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:31.580 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:31.580 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:31.580 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:31.580 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:31.580 [ 00:18:31.580 { 00:18:31.580 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:31.580 "subtype": "Discovery", 00:18:31.580 "listen_addresses": [], 00:18:31.580 "allow_any_host": true, 00:18:31.580 "hosts": [] 00:18:31.580 }, 00:18:31.580 { 00:18:31.580 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:31.580 "subtype": "NVMe", 00:18:31.580 "listen_addresses": [ 00:18:31.580 { 00:18:31.580 "trtype": "VFIOUSER", 00:18:31.580 "adrfam": "IPv4", 00:18:31.580 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:31.580 "trsvcid": "0" 00:18:31.580 } 00:18:31.580 ], 00:18:31.580 "allow_any_host": true, 00:18:31.580 "hosts": [], 00:18:31.580 "serial_number": "SPDK1", 00:18:31.580 "model_number": "SPDK bdev Controller", 00:18:31.580 "max_namespaces": 32, 00:18:31.580 "min_cntlid": 1, 00:18:31.580 "max_cntlid": 65519, 00:18:31.580 "namespaces": [ 00:18:31.580 { 00:18:31.580 "nsid": 1, 00:18:31.580 "bdev_name": "Malloc1", 00:18:31.580 "name": "Malloc1", 00:18:31.580 "nguid": "7D6FC28C9F3E41028418370B192D1703", 00:18:31.580 "uuid": "7d6fc28c-9f3e-4102-8418-370b192d1703" 00:18:31.580 }, 00:18:31.580 { 00:18:31.580 "nsid": 2, 00:18:31.580 "bdev_name": "Malloc3", 00:18:31.580 "name": "Malloc3", 00:18:31.580 "nguid": "E1DECDBC757243A98FF181AA9555396F", 00:18:31.580 "uuid": "e1decdbc-7572-43a9-8ff1-81aa9555396f" 00:18:31.580 } 00:18:31.580 ] 00:18:31.580 }, 00:18:31.580 { 00:18:31.580 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:31.580 "subtype": "NVMe", 00:18:31.580 "listen_addresses": [ 00:18:31.580 { 00:18:31.580 "trtype": "VFIOUSER", 00:18:31.580 "adrfam": "IPv4", 00:18:31.580 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:31.580 "trsvcid": "0" 00:18:31.580 } 00:18:31.580 ], 00:18:31.580 "allow_any_host": true, 00:18:31.580 "hosts": [], 00:18:31.580 "serial_number": "SPDK2", 00:18:31.580 "model_number": "SPDK bdev Controller", 00:18:31.580 "max_namespaces": 32, 00:18:31.580 "min_cntlid": 1, 00:18:31.580 "max_cntlid": 65519, 00:18:31.580 "namespaces": [ 00:18:31.580 { 00:18:31.580 "nsid": 1, 00:18:31.580 "bdev_name": "Malloc2", 00:18:31.580 "name": "Malloc2", 00:18:31.580 "nguid": "002B7FBE5D2549828126F71C50F69F8D", 00:18:31.580 "uuid": "002b7fbe-5d25-4982-8126-f71c50f69f8d" 00:18:31.580 } 00:18:31.580 ] 00:18:31.580 } 00:18:31.580 ] 00:18:31.580 11:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:31.580 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2041858 00:18:31.580 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:31.580 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:31.580 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:31.580 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:31.580 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:31.580 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:31.580 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:31.580 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:31.840 [2024-10-06 11:13:29.170600] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:31.840 Malloc4 00:18:31.840 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:32.101 [2024-10-06 11:13:29.422447] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:32.101 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:32.101 Asynchronous Event Request test 00:18:32.101 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:32.101 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:32.101 Registering asynchronous event callbacks... 00:18:32.101 Starting namespace attribute notice tests for all controllers... 00:18:32.101 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:32.101 aer_cb - Changed Namespace 00:18:32.101 Cleaning up... 
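The aer test above waits for exactly this kind of event: while it is running, the script hot-adds a namespace to cnode2, which makes the target raise the Namespace Attribute Changed asynchronous event seen in the output (aer_cb for log page 4, "aer_cb - Changed Namespace"). The trigger is the same pair of RPCs already visible in the log, shown here with the long workspace prefix trimmed:

    # Hot-add sequence that triggers the AER (rpc.py as invoked elsewhere in this log)
    rpc.py bdev_malloc_create 64 512 --name Malloc4
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    rpc.py nvmf_get_subsystems     # the listing below now shows nsid 2 (Malloc4) under cnode2

The updated subsystem listing follows.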
00:18:32.101 [ 00:18:32.101 { 00:18:32.101 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:32.101 "subtype": "Discovery", 00:18:32.101 "listen_addresses": [], 00:18:32.101 "allow_any_host": true, 00:18:32.101 "hosts": [] 00:18:32.101 }, 00:18:32.101 { 00:18:32.101 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:32.101 "subtype": "NVMe", 00:18:32.101 "listen_addresses": [ 00:18:32.101 { 00:18:32.101 "trtype": "VFIOUSER", 00:18:32.101 "adrfam": "IPv4", 00:18:32.101 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:32.101 "trsvcid": "0" 00:18:32.101 } 00:18:32.101 ], 00:18:32.101 "allow_any_host": true, 00:18:32.101 "hosts": [], 00:18:32.101 "serial_number": "SPDK1", 00:18:32.101 "model_number": "SPDK bdev Controller", 00:18:32.101 "max_namespaces": 32, 00:18:32.101 "min_cntlid": 1, 00:18:32.101 "max_cntlid": 65519, 00:18:32.101 "namespaces": [ 00:18:32.101 { 00:18:32.101 "nsid": 1, 00:18:32.101 "bdev_name": "Malloc1", 00:18:32.101 "name": "Malloc1", 00:18:32.101 "nguid": "7D6FC28C9F3E41028418370B192D1703", 00:18:32.101 "uuid": "7d6fc28c-9f3e-4102-8418-370b192d1703" 00:18:32.101 }, 00:18:32.101 { 00:18:32.101 "nsid": 2, 00:18:32.101 "bdev_name": "Malloc3", 00:18:32.101 "name": "Malloc3", 00:18:32.101 "nguid": "E1DECDBC757243A98FF181AA9555396F", 00:18:32.101 "uuid": "e1decdbc-7572-43a9-8ff1-81aa9555396f" 00:18:32.101 } 00:18:32.101 ] 00:18:32.101 }, 00:18:32.101 { 00:18:32.101 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:32.101 "subtype": "NVMe", 00:18:32.101 "listen_addresses": [ 00:18:32.101 { 00:18:32.101 "trtype": "VFIOUSER", 00:18:32.101 "adrfam": "IPv4", 00:18:32.101 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:32.101 "trsvcid": "0" 00:18:32.101 } 00:18:32.101 ], 00:18:32.101 "allow_any_host": true, 00:18:32.101 "hosts": [], 00:18:32.101 "serial_number": "SPDK2", 00:18:32.101 "model_number": "SPDK bdev Controller", 00:18:32.101 "max_namespaces": 32, 00:18:32.101 "min_cntlid": 1, 00:18:32.101 "max_cntlid": 65519, 00:18:32.101 "namespaces": [ 00:18:32.101 { 00:18:32.101 "nsid": 1, 00:18:32.101 "bdev_name": "Malloc2", 00:18:32.101 "name": "Malloc2", 00:18:32.101 "nguid": "002B7FBE5D2549828126F71C50F69F8D", 00:18:32.101 "uuid": "002b7fbe-5d25-4982-8126-f71c50f69f8d" 00:18:32.101 }, 00:18:32.101 { 00:18:32.101 "nsid": 2, 00:18:32.101 "bdev_name": "Malloc4", 00:18:32.101 "name": "Malloc4", 00:18:32.101 "nguid": "CB3FC427428C4316B695F79C8E0506D5", 00:18:32.101 "uuid": "cb3fc427-428c-4316-b695-f79c8e0506d5" 00:18:32.101 } 00:18:32.101 ] 00:18:32.101 } 00:18:32.101 ] 00:18:32.101 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2041858 00:18:32.101 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:32.101 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2034358 00:18:32.101 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2034358 ']' 00:18:32.101 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2034358 00:18:32.101 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:32.101 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:32.101 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2034358 00:18:32.361 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:32.361 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:32.361 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2034358' 00:18:32.361 killing process with pid 2034358 00:18:32.361 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2034358 00:18:32.361 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2034358 00:18:32.621 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:32.621 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:32.621 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:32.621 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:32.621 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:32.621 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2041884 00:18:32.621 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2041884' 00:18:32.621 Process pid: 2041884 00:18:32.621 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:32.621 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:32.621 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2041884 00:18:32.621 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2041884 ']' 00:18:32.621 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.621 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:32.622 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.622 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:32.622 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:32.622 [2024-10-06 11:13:29.991623] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:32.622 [2024-10-06 11:13:29.992519] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:18:32.622 [2024-10-06 11:13:29.992558] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.622 [2024-10-06 11:13:30.052934] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:32.622 [2024-10-06 11:13:30.095985] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.622 [2024-10-06 11:13:30.096029] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.622 [2024-10-06 11:13:30.096036] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.622 [2024-10-06 11:13:30.096044] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.622 [2024-10-06 11:13:30.096050] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.622 [2024-10-06 11:13:30.097531] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.622 [2024-10-06 11:13:30.097552] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.622 [2024-10-06 11:13:30.097636] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.622 [2024-10-06 11:13:30.097638] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.622 [2024-10-06 11:13:30.174115] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:32.622 [2024-10-06 11:13:30.174199] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:32.622 [2024-10-06 11:13:30.174412] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:32.622 [2024-10-06 11:13:30.174754] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:32.622 [2024-10-06 11:13:30.175032] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:18:32.883 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:32.883 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:32.883 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:33.824 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:33.824 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:33.824 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:34.084 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:34.084 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:34.084 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:34.084 Malloc1 00:18:34.084 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:34.344 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:34.604 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:34.604 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:34.604 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:34.604 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:34.864 Malloc2 00:18:34.864 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:35.123 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:35.383 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:35.642 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:35.642 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2041884 00:18:35.642 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 2041884 ']' 00:18:35.642 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2041884 00:18:35.642 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:35.642 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:35.642 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2041884 00:18:35.642 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:35.642 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:35.642 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2041884' 00:18:35.642 killing process with pid 2041884 00:18:35.642 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2041884 00:18:35.642 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2041884 00:18:35.902 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:35.902 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:35.902 00:18:35.902 real 0m50.396s 00:18:35.902 user 3m15.148s 00:18:35.902 sys 0m3.215s 00:18:35.902 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:35.902 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:35.902 ************************************ 00:18:35.902 END TEST nvmf_vfio_user 00:18:35.902 ************************************ 00:18:35.902 11:13:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:35.902 11:13:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:35.902 11:13:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:35.902 11:13:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:35.902 ************************************ 00:18:35.902 START TEST nvmf_vfio_user_nvme_compliance 00:18:35.902 ************************************ 00:18:35.902 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:35.902 * Looking for test storage... 
00:18:35.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:35.902 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:35.902 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:18:35.902 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:35.902 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:35.902 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:35.902 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:35.902 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:35.902 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.162 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:36.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.163 --rc genhtml_branch_coverage=1 00:18:36.163 --rc genhtml_function_coverage=1 00:18:36.163 --rc genhtml_legend=1 00:18:36.163 --rc geninfo_all_blocks=1 00:18:36.163 --rc geninfo_unexecuted_blocks=1 00:18:36.163 00:18:36.163 ' 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:36.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.163 --rc genhtml_branch_coverage=1 00:18:36.163 --rc genhtml_function_coverage=1 00:18:36.163 --rc genhtml_legend=1 00:18:36.163 --rc geninfo_all_blocks=1 00:18:36.163 --rc geninfo_unexecuted_blocks=1 00:18:36.163 00:18:36.163 ' 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:36.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.163 --rc genhtml_branch_coverage=1 00:18:36.163 --rc genhtml_function_coverage=1 00:18:36.163 --rc genhtml_legend=1 00:18:36.163 --rc geninfo_all_blocks=1 00:18:36.163 --rc geninfo_unexecuted_blocks=1 00:18:36.163 00:18:36.163 ' 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:36.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.163 --rc genhtml_branch_coverage=1 00:18:36.163 --rc genhtml_function_coverage=1 00:18:36.163 --rc genhtml_legend=1 00:18:36.163 --rc geninfo_all_blocks=1 00:18:36.163 --rc 
geninfo_unexecuted_blocks=1 00:18:36.163 00:18:36.163 ' 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:36.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:36.163 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:36.164 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:36.164 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:36.164 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:36.164 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2042630 00:18:36.164 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:36.164 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2042630' 00:18:36.164 Process pid: 2042630 00:18:36.164 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:36.164 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2042630 00:18:36.164 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 2042630 ']' 00:18:36.164 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.164 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:36.164 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.164 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:36.164 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:36.164 [2024-10-06 11:13:33.562013] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:18:36.164 [2024-10-06 11:13:33.562068] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.164 [2024-10-06 11:13:33.612450] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:36.164 [2024-10-06 11:13:33.652133] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.164 [2024-10-06 11:13:33.652173] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.164 [2024-10-06 11:13:33.652180] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.164 [2024-10-06 11:13:33.652186] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.164 [2024-10-06 11:13:33.652191] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.164 [2024-10-06 11:13:33.653101] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.164 [2024-10-06 11:13:33.656072] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.164 [2024-10-06 11:13:33.656074] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.425 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:36.425 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:18:36.425 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:37.364 malloc0 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:37.364 11:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.364 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:37.364 00:18:37.364 00:18:37.364 CUnit - A unit testing framework for C - Version 2.1-3 00:18:37.364 http://cunit.sourceforge.net/ 00:18:37.364 00:18:37.364 00:18:37.364 Suite: nvme_compliance 00:18:37.625 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-06 11:13:34.958616] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:37.625 [2024-10-06 11:13:34.959949] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:37.625 [2024-10-06 11:13:34.959965] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:37.625 [2024-10-06 11:13:34.959971] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:37.625 [2024-10-06 11:13:34.961637] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:37.625 passed 00:18:37.625 Test: admin_identify_ctrlr_verify_fused ...[2024-10-06 11:13:35.042220] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:37.625 [2024-10-06 11:13:35.045246] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:37.625 passed 00:18:37.625 Test: admin_identify_ns ...[2024-10-06 11:13:35.125257] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:37.625 [2024-10-06 11:13:35.186075] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:37.625 [2024-10-06 11:13:35.194072] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:37.885 [2024-10-06 11:13:35.215170] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:18:37.885 passed 00:18:37.885 Test: admin_get_features_mandatory_features ...[2024-10-06 11:13:35.291146] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:37.885 [2024-10-06 11:13:35.294166] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:37.885 passed 00:18:37.885 Test: admin_get_features_optional_features ...[2024-10-06 11:13:35.372671] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:37.885 [2024-10-06 11:13:35.375688] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:37.885 passed 00:18:37.885 Test: admin_set_features_number_of_queues ...[2024-10-06 11:13:35.453329] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.144 [2024-10-06 11:13:35.558153] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.144 passed 00:18:38.144 Test: admin_get_log_page_mandatory_logs ...[2024-10-06 11:13:35.637655] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.144 [2024-10-06 11:13:35.640675] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.144 passed 00:18:38.144 Test: admin_get_log_page_with_lpo ...[2024-10-06 11:13:35.718314] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.404 [2024-10-06 11:13:35.786072] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:38.404 [2024-10-06 11:13:35.799147] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.404 passed 00:18:38.404 Test: fabric_property_get ...[2024-10-06 11:13:35.876197] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.404 [2024-10-06 11:13:35.877425] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:38.404 [2024-10-06 11:13:35.879216] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.404 passed 00:18:38.404 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-06 11:13:35.956724] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.404 [2024-10-06 11:13:35.957945] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:38.404 [2024-10-06 11:13:35.959749] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.663 passed 00:18:38.663 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-06 11:13:36.037371] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.663 [2024-10-06 11:13:36.121067] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:38.663 [2024-10-06 11:13:36.137073] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:38.663 [2024-10-06 11:13:36.142148] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.663 passed 00:18:38.663 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-06 11:13:36.217732] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.663 [2024-10-06 11:13:36.218966] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:38.663 [2024-10-06 11:13:36.220752] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.922 passed 00:18:38.922 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-06 11:13:36.298380] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.922 [2024-10-06 11:13:36.375069] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:38.922 [2024-10-06 11:13:36.399064] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:38.922 [2024-10-06 11:13:36.404150] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.922 passed 00:18:38.922 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-06 11:13:36.477909] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.922 [2024-10-06 11:13:36.479144] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:38.922 [2024-10-06 11:13:36.479172] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:38.922 [2024-10-06 11:13:36.480927] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.182 passed 00:18:39.182 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-06 11:13:36.562293] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.182 [2024-10-06 11:13:36.655070] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:39.182 [2024-10-06 11:13:36.663065] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:39.182 [2024-10-06 11:13:36.671070] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:39.182 [2024-10-06 11:13:36.679065] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:39.182 [2024-10-06 11:13:36.708150] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.182 passed 00:18:39.442 Test: admin_create_io_sq_verify_pc ...[2024-10-06 11:13:36.783922] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.442 [2024-10-06 11:13:36.800077] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:39.442 [2024-10-06 11:13:36.815082] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.442 passed 00:18:39.442 Test: admin_create_io_qp_max_qps ...[2024-10-06 11:13:36.891597] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.821 [2024-10-06 11:13:37.985069] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:18:40.821 [2024-10-06 11:13:38.371636] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.080 passed 00:18:41.080 Test: admin_create_io_sq_shared_cq ...[2024-10-06 11:13:38.452377] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.080 [2024-10-06 11:13:38.585078] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:41.080 [2024-10-06 11:13:38.622133] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.080 passed 00:18:41.080 00:18:41.080 Run Summary: Type Total Ran Passed Failed Inactive 00:18:41.080 suites 1 1 n/a 0 0 00:18:41.080 tests 18 18 18 0 0 00:18:41.080 asserts 360 
360 360 0 n/a 00:18:41.080 00:18:41.080 Elapsed time = 1.506 seconds 00:18:41.340 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2042630 00:18:41.340 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 2042630 ']' 00:18:41.340 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 2042630 00:18:41.340 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:18:41.340 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:41.340 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2042630 00:18:41.340 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:41.340 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:41.340 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2042630' 00:18:41.340 killing process with pid 2042630 00:18:41.340 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 2042630 00:18:41.340 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 2042630 00:18:41.340 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:41.340 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:41.600 00:18:41.600 real 0m5.583s 00:18:41.600 user 0m15.664s 00:18:41.600 sys 0m0.509s 00:18:41.600 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:41.600 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:41.600 ************************************ 00:18:41.600 END TEST nvmf_vfio_user_nvme_compliance 00:18:41.600 ************************************ 00:18:41.600 11:13:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:41.600 11:13:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:41.600 11:13:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:41.600 11:13:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:41.600 ************************************ 00:18:41.600 START TEST nvmf_vfio_user_fuzz 00:18:41.600 ************************************ 00:18:41.600 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:41.600 * Looking for test storage... 
00:18:41.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:41.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.600 --rc genhtml_branch_coverage=1 00:18:41.600 --rc genhtml_function_coverage=1 00:18:41.600 --rc genhtml_legend=1 00:18:41.600 --rc geninfo_all_blocks=1 00:18:41.600 --rc geninfo_unexecuted_blocks=1 00:18:41.600 00:18:41.600 ' 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:41.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.600 --rc genhtml_branch_coverage=1 00:18:41.600 --rc genhtml_function_coverage=1 00:18:41.600 --rc genhtml_legend=1 00:18:41.600 --rc geninfo_all_blocks=1 00:18:41.600 --rc geninfo_unexecuted_blocks=1 00:18:41.600 00:18:41.600 ' 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:41.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.600 --rc genhtml_branch_coverage=1 00:18:41.600 --rc genhtml_function_coverage=1 00:18:41.600 --rc genhtml_legend=1 00:18:41.600 --rc geninfo_all_blocks=1 00:18:41.600 --rc geninfo_unexecuted_blocks=1 00:18:41.600 00:18:41.600 ' 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:41.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.600 --rc genhtml_branch_coverage=1 00:18:41.600 --rc genhtml_function_coverage=1 00:18:41.600 --rc genhtml_legend=1 00:18:41.600 --rc geninfo_all_blocks=1 00:18:41.600 --rc geninfo_unexecuted_blocks=1 00:18:41.600 00:18:41.600 ' 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.600 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:41.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2043600 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2043600' 00:18:41.601 Process pid: 2043600 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2043600 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2043600 ']' 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
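The "[: : integer expression expected" message above comes from nvmf/common.sh line 33 running an integer test ('[' '' -eq 1 ']') against a variable that is empty in this environment; the test simply evaluates false and the run continues. A minimal bash sketch of the failure mode and a defensive rewrite (FLAG is a placeholder name; the actual variable checked at line 33 is not shown in this log):

    FLAG=""                                   # empty in this CI environment
    if [ "$FLAG" -eq 1 ]; then :; fi          # prints "[: : integer expression expected"
    if [ "${FLAG:-0}" -eq 1 ]; then :; fi     # defaulting to 0 keeps the test numeric and silent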
00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:41.601 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:41.861 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:41.861 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:18:41.861 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:43.241 malloc0 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
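The block above builds the whole vfio-user fuzz target over JSON-RPC: a VFIOUSER transport, a 64 MB / 512-byte-block malloc bdev, subsystem nqn.2021-09.io.spdk:cnode0 with malloc0 as its namespace, and a listener rooted at /var/run/vfio-user. The same sequence as a standalone sketch against a running nvmf_tgt (the rpc.py path and default RPC socket are assumptions):

    RPC=./scripts/rpc.py                       # adjust to the SPDK checkout in use
    $RPC nvmf_create_transport -t VFIOUSER
    $RPC bdev_malloc_create 64 512 -b malloc0                         # 64 MB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    mkdir -p /var/run/vfio-user
    $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz run that follows simply targets this listener through the trid string 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' for 30 seconds with a fixed seed.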
00:18:43.241 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:15.338 Fuzzing completed. Shutting down the fuzz application 00:19:15.338 00:19:15.338 Dumping successful admin opcodes: 00:19:15.338 8, 9, 10, 24, 00:19:15.338 Dumping successful io opcodes: 00:19:15.338 0, 00:19:15.338 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1026703, total successful commands: 4043, random_seed: 3563364352 00:19:15.338 NS: 0x200003a1ef00 admin qp, Total commands completed: 254101, total successful commands: 2051, random_seed: 4124811648 00:19:15.338 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:15.338 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.338 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:15.338 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.338 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2043600 00:19:15.338 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2043600 ']' 00:19:15.338 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 2043600 00:19:15.338 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:19:15.338 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:15.338 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2043600 00:19:15.338 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:15.338 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:15.338 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2043600' 00:19:15.338 killing process with pid 2043600 00:19:15.338 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 2043600 00:19:15.338 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 2043600 00:19:15.338 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:15.338 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:15.338 00:19:15.338 real 0m32.176s 00:19:15.338 user 0m29.889s 00:19:15.338 sys 0m30.872s 00:19:15.338 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:15.338 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:15.338 
************************************ 00:19:15.338 END TEST nvmf_vfio_user_fuzz 00:19:15.338 ************************************ 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:15.339 ************************************ 00:19:15.339 START TEST nvmf_auth_target 00:19:15.339 ************************************ 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:15.339 * Looking for test storage... 00:19:15.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:15.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.339 --rc genhtml_branch_coverage=1 00:19:15.339 --rc genhtml_function_coverage=1 00:19:15.339 --rc genhtml_legend=1 00:19:15.339 --rc geninfo_all_blocks=1 00:19:15.339 --rc geninfo_unexecuted_blocks=1 00:19:15.339 00:19:15.339 ' 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:15.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.339 --rc genhtml_branch_coverage=1 00:19:15.339 --rc genhtml_function_coverage=1 00:19:15.339 --rc genhtml_legend=1 00:19:15.339 --rc geninfo_all_blocks=1 00:19:15.339 --rc geninfo_unexecuted_blocks=1 00:19:15.339 00:19:15.339 ' 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:15.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.339 --rc genhtml_branch_coverage=1 00:19:15.339 --rc genhtml_function_coverage=1 00:19:15.339 --rc genhtml_legend=1 00:19:15.339 --rc geninfo_all_blocks=1 00:19:15.339 --rc geninfo_unexecuted_blocks=1 00:19:15.339 00:19:15.339 ' 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:15.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.339 --rc genhtml_branch_coverage=1 00:19:15.339 --rc genhtml_function_coverage=1 00:19:15.339 --rc genhtml_legend=1 00:19:15.339 --rc geninfo_all_blocks=1 00:19:15.339 --rc geninfo_unexecuted_blocks=1 00:19:15.339 00:19:15.339 ' 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:15.339 11:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.339 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:15.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:15.340 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:18.636 
11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:18.636 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:18.636 11:14:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:18.636 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:18.636 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:18.637 Found net devices under 0000:af:00.0: cvl_0_0 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:18.637 Found net devices under 0000:af:00.1: cvl_0_1 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:18.637 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:18.897 11:14:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:18.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:18.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:19:18.897 00:19:18.897 --- 10.0.0.2 ping statistics --- 00:19:18.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.897 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:18.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:18.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:19:18.897 00:19:18.897 --- 10.0.0.1 ping statistics --- 00:19:18.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.897 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=2051696 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 2051696 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2051696 ']' 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
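Both pings succeeding confirms the split topology the script just assembled: the target-side port (cvl_0_0, 10.0.0.2) now lives inside namespace cvl_0_0_ns_spdk while the initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace, with TCP port 4420 accepted on the initiator-side interface. Condensed from the commands logged above (the cvl_* names are what this host reported; substitute your own interfaces):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # accept TCP/4420 on the initiator-side port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1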
00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.897 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2051719 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=5f5ce2bf8855105ee86d298b3261e70ceee12c115c0aadc2 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.GaU 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 5f5ce2bf8855105ee86d298b3261e70ceee12c115c0aadc2 0 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 5f5ce2bf8855105ee86d298b3261e70ceee12c115c0aadc2 0 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # 
key=5f5ce2bf8855105ee86d298b3261e70ceee12c115c0aadc2 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.GaU 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.GaU 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.GaU 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=7f21edb77ee0883eeb0ab002a1b18f0e6e34bc60b8eb73b59261f43c3caf7e84 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.xgX 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 7f21edb77ee0883eeb0ab002a1b18f0e6e34bc60b8eb73b59261f43c3caf7e84 3 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 7f21edb77ee0883eeb0ab002a1b18f0e6e34bc60b8eb73b59261f43c3caf7e84 3 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=7f21edb77ee0883eeb0ab002a1b18f0e6e34bc60b8eb73b59261f43c3caf7e84 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:19:19.158 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.418 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.xgX 00:19:19.418 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.xgX 00:19:19.418 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.xgX 00:19:19.418 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:19.418 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.418 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.418 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.418 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:19:19.418 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:19:19.418 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:19.418 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=cb184e8df87cbf63ce1a96017bf69aab 00:19:19.418 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:19:19.418 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.RZo 00:19:19.418 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key cb184e8df87cbf63ce1a96017bf69aab 1 00:19:19.418 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 cb184e8df87cbf63ce1a96017bf69aab 1 00:19:19.418 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=cb184e8df87cbf63ce1a96017bf69aab 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.RZo 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.RZo 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.RZo 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=3cbf50f752434441243c6882b3ade81086df2496dad7f05f 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Elr 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 3cbf50f752434441243c6882b3ade81086df2496dad7f05f 2 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@745 -- # format_key DHHC-1 3cbf50f752434441243c6882b3ade81086df2496dad7f05f 2 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=3cbf50f752434441243c6882b3ade81086df2496dad7f05f 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Elr 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Elr 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Elr 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=c552827c1b1db203f9c8637dc3a8b0354509b8fe7e57f021 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.5dl 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key c552827c1b1db203f9c8637dc3a8b0354509b8fe7e57f021 2 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 c552827c1b1db203f9c8637dc3a8b0354509b8fe7e57f021 2 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=c552827c1b1db203f9c8637dc3a8b0354509b8fe7e57f021 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.5dl 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.5dl 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.5dl 
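Each key file here is produced by the same gen_dhchap_key recipe: the caller picks a digest name and a secret length in hex characters (48 for the null and sha384 keys, 32 for sha256, 64 for sha512 in this run), xxd reads half that many random bytes from /dev/urandom, a small inline "python -" helper in nvmf/common.sh (not reproduced in this log) wraps the hex string into the DHHC-1 secret format, and the result lands in a mode-0600 temp file. A sketch of the visible shell portion, with the wrapping step left as a stub:

    gen_key_sketch() {                         # illustration only; mirrors the logged shell steps
        local digest=$1 len=$2 key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)         # len hex characters of randomness
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # format_dhchap_key / format_key (the python helper seen above) would turn $key into
        # the final DHHC-1 string here; the raw hex is written instead to keep the sketch short.
        echo "$key" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }
    gen_key_sketch sha256 32                   # e.g. the keys[1] entry above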
00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=d20588334ee3203fbd89dda10bcf4bd8 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.iN9 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key d20588334ee3203fbd89dda10bcf4bd8 1 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 d20588334ee3203fbd89dda10bcf4bd8 1 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=d20588334ee3203fbd89dda10bcf4bd8 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:19:19.419 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.iN9 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.iN9 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.iN9 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=51ce84c48988286cca3bdfe3797a75508e4f2ac19c0cb1a04c96c0d3e4aa52ff 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t 
spdk.key-sha512.XXX 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.Svg 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 51ce84c48988286cca3bdfe3797a75508e4f2ac19c0cb1a04c96c0d3e4aa52ff 3 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 51ce84c48988286cca3bdfe3797a75508e4f2ac19c0cb1a04c96c0d3e4aa52ff 3 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=51ce84c48988286cca3bdfe3797a75508e4f2ac19c0cb1a04c96c0d3e4aa52ff 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.Svg 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.Svg 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Svg 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2051696 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2051696 ']' 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:19.679 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2051719 /var/tmp/host.sock 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2051719 ']' 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:19:19.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GaU 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.GaU 00:19:19.940 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.GaU 00:19:20.200 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.xgX ]] 00:19:20.200 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xgX 00:19:20.200 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.200 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.200 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.200 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xgX 00:19:20.200 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xgX 00:19:20.459 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:20.459 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.RZo 00:19:20.459 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.459 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.459 11:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.459 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.RZo 00:19:20.459 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.RZo 00:19:20.723 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Elr ]] 00:19:20.723 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Elr 00:19:20.723 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.723 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.723 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.723 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Elr 00:19:20.723 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Elr 00:19:20.983 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:20.983 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5dl 00:19:20.983 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.983 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.983 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.983 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.5dl 00:19:20.983 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.5dl 00:19:20.983 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.iN9 ]] 00:19:20.983 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iN9 00:19:20.983 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.983 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.983 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.983 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iN9 00:19:20.983 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iN9 00:19:21.243 11:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:21.243 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Svg 00:19:21.243 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.243 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.243 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.243 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Svg 00:19:21.243 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Svg 00:19:21.502 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:21.502 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:21.502 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.502 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.502 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:21.502 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:21.502 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:21.502 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.502 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:21.502 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:21.502 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:21.502 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.502 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.502 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.502 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.502 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.502 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.502 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.502 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.761 00:19:21.761 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.761 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.761 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.020 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.020 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.020 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.020 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.020 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.020 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.020 { 00:19:22.020 "cntlid": 1, 00:19:22.020 "qid": 0, 00:19:22.020 "state": "enabled", 00:19:22.020 "thread": "nvmf_tgt_poll_group_000", 00:19:22.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:22.020 "listen_address": { 00:19:22.020 "trtype": "TCP", 00:19:22.020 "adrfam": "IPv4", 00:19:22.020 "traddr": "10.0.0.2", 00:19:22.020 "trsvcid": "4420" 00:19:22.020 }, 00:19:22.020 "peer_address": { 00:19:22.020 "trtype": "TCP", 00:19:22.020 "adrfam": "IPv4", 00:19:22.020 "traddr": "10.0.0.1", 00:19:22.020 "trsvcid": "33920" 00:19:22.020 }, 00:19:22.020 "auth": { 00:19:22.020 "state": "completed", 00:19:22.020 "digest": "sha256", 00:19:22.020 "dhgroup": "null" 00:19:22.020 } 00:19:22.020 } 00:19:22.020 ]' 00:19:22.020 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.020 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.020 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.279 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:22.280 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.280 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.280 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.280 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
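At this point the script has entered its main verification loop: for every digest/DH-group/key combination it restricts the host-side bdev_nvme options, lets the target require the matching key pair for the host NQN, attaches a controller, and asserts from nvmf_subsystem_get_qpairs that the qpair's auth block reports the expected digest, dhgroup and a "completed" state before detaching again. A condensed, hypothetical re-creation of one such pass (sha256 digest, null DH group, key0/ckey0, with the NQNs and addresses taken from this log) looks like this:

# Condensed sketch of one connect_authenticate pass as traced above. The target RPC
# server listens on the default /var/tmp/spdk.sock, the host-side bdev_nvme instance
# on /var/tmp/host.sock; key0/ckey0 were registered earlier with keyring_file_add_key
# on both RPC servers. rpc.py is the stock SPDK script (scripts/rpc.py in the repo).
rpc="scripts/rpc.py"                                  # target RPC (default socket)
hostrpc="scripts/rpc.py -s /var/tmp/host.sock"        # host-side RPC
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

# 1. Restrict the host to one digest/DH-group combination for this pass.
$hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# 2. Allow the host on the subsystem, binding the keyring entries it must present.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller from the host side with the same keys ...
$hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. ... and assert that the qpair actually completed DH-HMAC-CHAP.
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"

# 5. Tear down before the next digest/dhgroup/key combination.
$hostrpc bdev_nvme_detach_controller nvme0

The same keys are then exercised from the kernel initiator, as the nvme connect ... --dhchap-secret DHHC-1:00:...: --dhchap-ctrl-secret DHHC-1:03:...: commands traced immediately below show, before nvme disconnect and nvmf_subsystem_remove_host clear the host entry for the next iteration.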
00:19:22.280 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:19:22.280 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:19:22.847 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.107 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.367 00:19:23.367 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.367 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.367 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.626 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.626 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.626 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.626 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.626 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.626 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.626 { 00:19:23.626 "cntlid": 3, 00:19:23.626 "qid": 0, 00:19:23.626 "state": "enabled", 00:19:23.626 "thread": "nvmf_tgt_poll_group_000", 00:19:23.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:23.626 "listen_address": { 00:19:23.626 "trtype": "TCP", 00:19:23.626 "adrfam": "IPv4", 00:19:23.626 "traddr": "10.0.0.2", 00:19:23.626 "trsvcid": "4420" 00:19:23.626 }, 00:19:23.626 "peer_address": { 00:19:23.626 "trtype": "TCP", 00:19:23.626 "adrfam": "IPv4", 00:19:23.626 "traddr": "10.0.0.1", 00:19:23.626 "trsvcid": "33940" 00:19:23.626 }, 00:19:23.626 "auth": { 00:19:23.626 "state": "completed", 00:19:23.626 "digest": "sha256", 00:19:23.626 "dhgroup": "null" 00:19:23.626 } 00:19:23.626 } 00:19:23.626 ]' 00:19:23.626 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.626 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.626 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.626 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:23.626 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.886 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.886 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:23.886 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.886 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:19:23.886 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:19:24.454 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.454 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:24.454 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.454 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.454 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.454 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.454 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:24.454 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:24.712 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:24.712 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.712 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:24.712 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:24.712 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:24.712 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.712 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.712 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.712 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.712 11:14:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.712 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.712 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.712 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.970 00:19:24.970 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.970 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.970 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.227 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.227 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.227 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.227 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.227 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.227 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.227 { 00:19:25.227 "cntlid": 5, 00:19:25.227 "qid": 0, 00:19:25.227 "state": "enabled", 00:19:25.227 "thread": "nvmf_tgt_poll_group_000", 00:19:25.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:25.227 "listen_address": { 00:19:25.227 "trtype": "TCP", 00:19:25.227 "adrfam": "IPv4", 00:19:25.227 "traddr": "10.0.0.2", 00:19:25.227 "trsvcid": "4420" 00:19:25.227 }, 00:19:25.227 "peer_address": { 00:19:25.227 "trtype": "TCP", 00:19:25.227 "adrfam": "IPv4", 00:19:25.227 "traddr": "10.0.0.1", 00:19:25.227 "trsvcid": "33964" 00:19:25.227 }, 00:19:25.227 "auth": { 00:19:25.227 "state": "completed", 00:19:25.227 "digest": "sha256", 00:19:25.227 "dhgroup": "null" 00:19:25.227 } 00:19:25.227 } 00:19:25.227 ]' 00:19:25.227 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.227 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.227 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.227 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:25.227 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.227 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.227 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.227 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.486 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:19:25.486 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:19:26.052 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.052 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:26.052 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.052 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.052 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.052 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.053 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:26.053 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:26.312 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:26.312 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.312 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:26.312 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:26.312 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:26.312 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.312 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:26.312 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.312 
11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.312 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.312 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:26.312 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.312 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.571 00:19:26.571 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.571 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.571 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.831 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.831 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.831 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.831 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.831 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.831 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.831 { 00:19:26.831 "cntlid": 7, 00:19:26.831 "qid": 0, 00:19:26.831 "state": "enabled", 00:19:26.831 "thread": "nvmf_tgt_poll_group_000", 00:19:26.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:26.831 "listen_address": { 00:19:26.831 "trtype": "TCP", 00:19:26.831 "adrfam": "IPv4", 00:19:26.831 "traddr": "10.0.0.2", 00:19:26.831 "trsvcid": "4420" 00:19:26.831 }, 00:19:26.831 "peer_address": { 00:19:26.831 "trtype": "TCP", 00:19:26.831 "adrfam": "IPv4", 00:19:26.831 "traddr": "10.0.0.1", 00:19:26.831 "trsvcid": "33976" 00:19:26.831 }, 00:19:26.831 "auth": { 00:19:26.831 "state": "completed", 00:19:26.831 "digest": "sha256", 00:19:26.831 "dhgroup": "null" 00:19:26.831 } 00:19:26.831 } 00:19:26.831 ]' 00:19:26.831 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.831 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.831 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.831 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:26.831 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.831 11:14:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.831 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.831 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.091 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:19:27.091 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:19:27.660 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.660 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:27.660 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.660 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.660 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.660 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.660 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.660 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:27.660 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:27.919 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:27.919 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.919 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:27.919 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:27.919 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:27.919 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.919 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.919 11:14:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.919 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.919 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.919 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.919 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.919 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.178 00:19:28.178 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.178 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.178 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.178 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.178 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.178 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.178 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.178 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.178 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.178 { 00:19:28.178 "cntlid": 9, 00:19:28.178 "qid": 0, 00:19:28.178 "state": "enabled", 00:19:28.178 "thread": "nvmf_tgt_poll_group_000", 00:19:28.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:28.178 "listen_address": { 00:19:28.178 "trtype": "TCP", 00:19:28.178 "adrfam": "IPv4", 00:19:28.178 "traddr": "10.0.0.2", 00:19:28.178 "trsvcid": "4420" 00:19:28.178 }, 00:19:28.178 "peer_address": { 00:19:28.178 "trtype": "TCP", 00:19:28.178 "adrfam": "IPv4", 00:19:28.178 "traddr": "10.0.0.1", 00:19:28.178 "trsvcid": "34002" 00:19:28.178 }, 00:19:28.178 "auth": { 00:19:28.178 "state": "completed", 00:19:28.178 "digest": "sha256", 00:19:28.178 "dhgroup": "ffdhe2048" 00:19:28.178 } 00:19:28.178 } 00:19:28.178 ]' 00:19:28.178 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.437 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.437 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.438 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:28.438 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.438 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.438 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.438 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.697 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:19:28.697 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.266 
11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.266 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.526 00:19:29.526 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.526 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.526 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.786 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.786 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.786 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.786 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.786 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.786 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.786 { 00:19:29.786 "cntlid": 11, 00:19:29.786 "qid": 0, 00:19:29.786 "state": "enabled", 00:19:29.786 "thread": "nvmf_tgt_poll_group_000", 00:19:29.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:29.786 "listen_address": { 00:19:29.786 "trtype": "TCP", 00:19:29.786 "adrfam": "IPv4", 00:19:29.786 "traddr": "10.0.0.2", 00:19:29.786 "trsvcid": "4420" 00:19:29.786 }, 00:19:29.786 "peer_address": { 00:19:29.786 "trtype": "TCP", 00:19:29.786 "adrfam": "IPv4", 00:19:29.786 "traddr": "10.0.0.1", 00:19:29.786 "trsvcid": "34032" 00:19:29.786 }, 00:19:29.786 "auth": { 00:19:29.786 "state": "completed", 00:19:29.786 "digest": "sha256", 00:19:29.786 "dhgroup": "ffdhe2048" 00:19:29.786 } 00:19:29.786 } 00:19:29.787 ]' 00:19:29.787 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.787 11:14:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.787 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.787 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:29.787 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.046 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.046 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.046 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.046 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:19:30.046 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:19:30.616 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.616 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:30.616 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.616 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.616 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.616 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.616 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.616 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.876 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:30.876 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.876 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:30.876 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:30.876 11:14:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:30.876 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.876 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.876 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.876 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.876 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.876 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.876 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.876 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.135 00:19:31.135 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.135 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.135 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.394 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.394 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.394 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.394 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.394 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.394 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.394 { 00:19:31.394 "cntlid": 13, 00:19:31.394 "qid": 0, 00:19:31.394 "state": "enabled", 00:19:31.394 "thread": "nvmf_tgt_poll_group_000", 00:19:31.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:31.394 "listen_address": { 00:19:31.394 "trtype": "TCP", 00:19:31.394 "adrfam": "IPv4", 00:19:31.394 "traddr": "10.0.0.2", 00:19:31.394 "trsvcid": "4420" 00:19:31.394 }, 00:19:31.394 "peer_address": { 00:19:31.394 "trtype": "TCP", 00:19:31.394 "adrfam": "IPv4", 00:19:31.394 "traddr": "10.0.0.1", 00:19:31.394 "trsvcid": "34266" 00:19:31.394 }, 00:19:31.394 "auth": { 00:19:31.394 "state": "completed", 00:19:31.394 "digest": 
"sha256", 00:19:31.394 "dhgroup": "ffdhe2048" 00:19:31.394 } 00:19:31.394 } 00:19:31.394 ]' 00:19:31.394 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.394 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.394 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.394 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:31.394 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.394 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.394 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.394 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.654 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:19:31.654 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:19:32.222 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.222 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:32.222 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.222 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.222 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.222 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.222 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:32.222 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:32.483 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:32.483 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.483 11:14:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.483 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:32.483 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:32.483 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.483 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:32.483 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.483 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.483 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.483 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:32.483 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.483 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.743 00:19:32.743 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.743 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.743 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.003 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.003 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.003 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.003 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.003 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.003 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.003 { 00:19:33.003 "cntlid": 15, 00:19:33.003 "qid": 0, 00:19:33.003 "state": "enabled", 00:19:33.003 "thread": "nvmf_tgt_poll_group_000", 00:19:33.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:33.003 "listen_address": { 00:19:33.003 "trtype": "TCP", 00:19:33.003 "adrfam": "IPv4", 00:19:33.003 "traddr": "10.0.0.2", 00:19:33.003 "trsvcid": "4420" 00:19:33.003 }, 00:19:33.003 "peer_address": { 00:19:33.003 "trtype": "TCP", 00:19:33.003 "adrfam": "IPv4", 00:19:33.003 "traddr": "10.0.0.1", 00:19:33.003 
"trsvcid": "34286" 00:19:33.003 }, 00:19:33.003 "auth": { 00:19:33.003 "state": "completed", 00:19:33.003 "digest": "sha256", 00:19:33.003 "dhgroup": "ffdhe2048" 00:19:33.003 } 00:19:33.003 } 00:19:33.003 ]' 00:19:33.003 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.003 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.003 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.003 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:33.003 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.003 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.003 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.003 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.262 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:19:33.262 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:33.859 11:14:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.859 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.146 00:19:34.146 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.146 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.146 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.443 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.443 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.443 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.443 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.443 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.443 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.443 { 00:19:34.443 "cntlid": 17, 00:19:34.443 "qid": 0, 00:19:34.443 "state": "enabled", 00:19:34.443 "thread": "nvmf_tgt_poll_group_000", 00:19:34.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:34.443 "listen_address": { 00:19:34.443 "trtype": "TCP", 00:19:34.443 "adrfam": "IPv4", 
00:19:34.443 "traddr": "10.0.0.2", 00:19:34.443 "trsvcid": "4420" 00:19:34.443 }, 00:19:34.443 "peer_address": { 00:19:34.443 "trtype": "TCP", 00:19:34.443 "adrfam": "IPv4", 00:19:34.443 "traddr": "10.0.0.1", 00:19:34.443 "trsvcid": "34312" 00:19:34.443 }, 00:19:34.443 "auth": { 00:19:34.443 "state": "completed", 00:19:34.443 "digest": "sha256", 00:19:34.443 "dhgroup": "ffdhe3072" 00:19:34.443 } 00:19:34.443 } 00:19:34.443 ]' 00:19:34.443 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.443 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.443 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.443 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:34.443 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.443 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.443 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.443 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.702 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:19:34.702 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:19:35.271 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.271 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:35.271 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.271 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.271 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.271 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.271 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.271 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.530 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:35.530 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.530 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:35.530 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:35.530 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:35.530 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.530 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.530 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.530 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.530 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.530 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.530 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.530 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.789 00:19:35.789 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.789 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.789 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.048 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.048 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.048 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.048 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.048 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.048 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.048 { 
00:19:36.048 "cntlid": 19, 00:19:36.048 "qid": 0, 00:19:36.048 "state": "enabled", 00:19:36.048 "thread": "nvmf_tgt_poll_group_000", 00:19:36.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:36.048 "listen_address": { 00:19:36.048 "trtype": "TCP", 00:19:36.049 "adrfam": "IPv4", 00:19:36.049 "traddr": "10.0.0.2", 00:19:36.049 "trsvcid": "4420" 00:19:36.049 }, 00:19:36.049 "peer_address": { 00:19:36.049 "trtype": "TCP", 00:19:36.049 "adrfam": "IPv4", 00:19:36.049 "traddr": "10.0.0.1", 00:19:36.049 "trsvcid": "34330" 00:19:36.049 }, 00:19:36.049 "auth": { 00:19:36.049 "state": "completed", 00:19:36.049 "digest": "sha256", 00:19:36.049 "dhgroup": "ffdhe3072" 00:19:36.049 } 00:19:36.049 } 00:19:36.049 ]' 00:19:36.049 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.049 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.049 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.049 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:36.049 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.049 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.049 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.049 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.307 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:19:36.307 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:19:36.876 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.876 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:36.876 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.876 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.876 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.876 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.876 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:36.876 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:37.135 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:37.135 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.135 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.135 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:37.135 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:37.135 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.135 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.136 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.136 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.136 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.136 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.136 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.136 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.395 00:19:37.395 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.395 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.395 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.655 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.655 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.655 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.655 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.655 11:14:35 
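
Besides the SPDK host stack, every round is also exercised with the kernel initiator: nvme connect is handed the same credentials in their DHHC-1 transport representation and the controller is disconnected again right after. A sketch of that step, with obviously fake placeholder strings standing in for the secrets generated earlier in the test:

    # placeholder secrets -- the real DHHC-1 values are generated earlier in the run
    secret='DHHC-1:01:PLACEHOLDER...:'
    ctrl_secret='DHHC-1:02:PLACEHOLDER...:'
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 \
        --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
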
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.655 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.655 { 00:19:37.655 "cntlid": 21, 00:19:37.655 "qid": 0, 00:19:37.655 "state": "enabled", 00:19:37.655 "thread": "nvmf_tgt_poll_group_000", 00:19:37.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:37.655 "listen_address": { 00:19:37.655 "trtype": "TCP", 00:19:37.655 "adrfam": "IPv4", 00:19:37.655 "traddr": "10.0.0.2", 00:19:37.655 "trsvcid": "4420" 00:19:37.655 }, 00:19:37.655 "peer_address": { 00:19:37.655 "trtype": "TCP", 00:19:37.655 "adrfam": "IPv4", 00:19:37.655 "traddr": "10.0.0.1", 00:19:37.655 "trsvcid": "34354" 00:19:37.655 }, 00:19:37.655 "auth": { 00:19:37.655 "state": "completed", 00:19:37.655 "digest": "sha256", 00:19:37.655 "dhgroup": "ffdhe3072" 00:19:37.655 } 00:19:37.655 } 00:19:37.655 ]' 00:19:37.655 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.655 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.655 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.655 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:37.655 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.655 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.655 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.655 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.914 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:19:37.914 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:19:38.482 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.482 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:38.482 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.482 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.482 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:19:38.482 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.482 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.482 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.741 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:38.741 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.741 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:38.741 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:38.741 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:38.741 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.741 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:38.742 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.742 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.742 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.742 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:38.742 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:38.742 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:39.001 00:19:39.001 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.001 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.001 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.260 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.260 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.260 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.260 11:14:36 
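
On the target side the pairing is managed per host NQN: before each attach the host is granted access to cnode0 with the key(s) for that round, and the mapping is removed once the round has passed. As the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion in the trace shows, the controller key is only passed when one exists for that index, which is why the key3 rounds add the host with --dhchap-key alone. A rough sketch, assuming the target's default RPC socket:

    # grant the host access, requiring DH-HMAC-CHAP with key3 (unidirectional in this round)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --dhchap-key key3
    # ...attach, verify, detach...
    # revoke access again before the next digest/dhgroup/key combination
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
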
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.260 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.260 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.260 { 00:19:39.260 "cntlid": 23, 00:19:39.260 "qid": 0, 00:19:39.260 "state": "enabled", 00:19:39.260 "thread": "nvmf_tgt_poll_group_000", 00:19:39.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:39.260 "listen_address": { 00:19:39.260 "trtype": "TCP", 00:19:39.260 "adrfam": "IPv4", 00:19:39.260 "traddr": "10.0.0.2", 00:19:39.260 "trsvcid": "4420" 00:19:39.260 }, 00:19:39.260 "peer_address": { 00:19:39.260 "trtype": "TCP", 00:19:39.260 "adrfam": "IPv4", 00:19:39.260 "traddr": "10.0.0.1", 00:19:39.260 "trsvcid": "34384" 00:19:39.260 }, 00:19:39.260 "auth": { 00:19:39.260 "state": "completed", 00:19:39.260 "digest": "sha256", 00:19:39.260 "dhgroup": "ffdhe3072" 00:19:39.260 } 00:19:39.260 } 00:19:39.260 ]' 00:19:39.260 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.260 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.260 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.260 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:39.260 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.260 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.260 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.260 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.519 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:19:39.519 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:19:40.086 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.086 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:40.086 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.086 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.086 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:40.086 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.086 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.086 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:40.086 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:40.344 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:40.344 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.345 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:40.345 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:40.345 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:40.345 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.345 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.345 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.345 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.345 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.345 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.345 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.345 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.603 00:19:40.603 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.603 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.603 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.603 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.603 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.603 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.603 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.863 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.863 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.863 { 00:19:40.863 "cntlid": 25, 00:19:40.863 "qid": 0, 00:19:40.863 "state": "enabled", 00:19:40.863 "thread": "nvmf_tgt_poll_group_000", 00:19:40.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:40.863 "listen_address": { 00:19:40.863 "trtype": "TCP", 00:19:40.863 "adrfam": "IPv4", 00:19:40.863 "traddr": "10.0.0.2", 00:19:40.863 "trsvcid": "4420" 00:19:40.863 }, 00:19:40.863 "peer_address": { 00:19:40.863 "trtype": "TCP", 00:19:40.863 "adrfam": "IPv4", 00:19:40.863 "traddr": "10.0.0.1", 00:19:40.863 "trsvcid": "46610" 00:19:40.863 }, 00:19:40.863 "auth": { 00:19:40.863 "state": "completed", 00:19:40.863 "digest": "sha256", 00:19:40.863 "dhgroup": "ffdhe4096" 00:19:40.863 } 00:19:40.863 } 00:19:40.863 ]' 00:19:40.863 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.863 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.863 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.863 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:40.863 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.863 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.863 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.863 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.122 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:19:41.122 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:19:41.690 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.690 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:41.690 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.690 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.690 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.690 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.690 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:41.690 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:41.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:41.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:41.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:41.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.209 00:19:42.209 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.209 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.209 11:14:39 
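
The repetition running through this log comes from two nested loops in target/auth.sh: the outer loop at auth.sh@119 walks the configured DH groups (ffdhe2048, ffdhe3072, ffdhe4096 so far) and the inner loop at auth.sh@120 walks every key index, reconfiguring the host and re-running connect_authenticate for each combination. In rough outline, keeping the digest fixed at sha256 as it is throughout this stretch of the run:

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # limit the host to exactly one digest/dhgroup pair for this round
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            # add the host, attach, verify the qpair auth state, then tear down again
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done
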
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.209 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.209 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.209 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.209 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.209 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.209 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.209 { 00:19:42.209 "cntlid": 27, 00:19:42.209 "qid": 0, 00:19:42.209 "state": "enabled", 00:19:42.209 "thread": "nvmf_tgt_poll_group_000", 00:19:42.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:42.209 "listen_address": { 00:19:42.209 "trtype": "TCP", 00:19:42.209 "adrfam": "IPv4", 00:19:42.209 "traddr": "10.0.0.2", 00:19:42.209 "trsvcid": "4420" 00:19:42.209 }, 00:19:42.209 "peer_address": { 00:19:42.209 "trtype": "TCP", 00:19:42.209 "adrfam": "IPv4", 00:19:42.209 "traddr": "10.0.0.1", 00:19:42.209 "trsvcid": "46624" 00:19:42.209 }, 00:19:42.209 "auth": { 00:19:42.209 "state": "completed", 00:19:42.209 "digest": "sha256", 00:19:42.209 "dhgroup": "ffdhe4096" 00:19:42.209 } 00:19:42.209 } 00:19:42.209 ]' 00:19:42.209 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.468 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.468 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.468 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:42.468 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.468 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.468 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.468 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.727 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:19:42.727 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:19:43.296 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.296 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.296 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:43.296 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.296 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.296 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.296 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.296 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:43.296 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:43.296 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:43.296 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.296 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.296 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:43.296 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:43.296 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.296 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.296 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.296 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.556 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.556 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.556 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.556 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.815 00:19:43.815 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.815 11:14:41 
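
Each round ends by confirming that exactly the expected controller exists on the host and then removing it, so the next dhgroup/key combination starts from a clean state. A condensed sketch of that teardown as it appears in the trace:

    # the only attached controller should be nvme0
    [[ $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # drop it before reconfiguring for the next combination
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
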
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.815 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.815 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.815 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.815 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.815 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.815 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.815 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.815 { 00:19:43.815 "cntlid": 29, 00:19:43.815 "qid": 0, 00:19:43.815 "state": "enabled", 00:19:43.815 "thread": "nvmf_tgt_poll_group_000", 00:19:43.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:43.815 "listen_address": { 00:19:43.815 "trtype": "TCP", 00:19:43.815 "adrfam": "IPv4", 00:19:43.815 "traddr": "10.0.0.2", 00:19:43.815 "trsvcid": "4420" 00:19:43.815 }, 00:19:43.815 "peer_address": { 00:19:43.815 "trtype": "TCP", 00:19:43.815 "adrfam": "IPv4", 00:19:43.815 "traddr": "10.0.0.1", 00:19:43.815 "trsvcid": "46656" 00:19:43.815 }, 00:19:43.815 "auth": { 00:19:43.815 "state": "completed", 00:19:43.815 "digest": "sha256", 00:19:43.815 "dhgroup": "ffdhe4096" 00:19:43.815 } 00:19:43.815 } 00:19:43.815 ]' 00:19:43.815 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.074 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.074 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.075 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:44.075 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.075 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.075 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.075 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.334 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:19:44.334 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret 
DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:44.902 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:45.161 00:19:45.161 11:14:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.161 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.161 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.421 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.421 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.421 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.421 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.421 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.421 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.421 { 00:19:45.421 "cntlid": 31, 00:19:45.421 "qid": 0, 00:19:45.421 "state": "enabled", 00:19:45.421 "thread": "nvmf_tgt_poll_group_000", 00:19:45.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:45.421 "listen_address": { 00:19:45.421 "trtype": "TCP", 00:19:45.421 "adrfam": "IPv4", 00:19:45.421 "traddr": "10.0.0.2", 00:19:45.421 "trsvcid": "4420" 00:19:45.421 }, 00:19:45.421 "peer_address": { 00:19:45.421 "trtype": "TCP", 00:19:45.421 "adrfam": "IPv4", 00:19:45.421 "traddr": "10.0.0.1", 00:19:45.421 "trsvcid": "46688" 00:19:45.421 }, 00:19:45.421 "auth": { 00:19:45.421 "state": "completed", 00:19:45.421 "digest": "sha256", 00:19:45.421 "dhgroup": "ffdhe4096" 00:19:45.421 } 00:19:45.421 } 00:19:45.421 ]' 00:19:45.421 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.421 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.421 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.680 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:45.680 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.680 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.680 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.680 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.680 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:19:45.680 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:19:46.248 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.249 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:46.249 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.249 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.508 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.508 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.508 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.508 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:46.508 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:46.508 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:46.508 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.508 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.508 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:46.508 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:46.508 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.508 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.508 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.508 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.508 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.508 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.508 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.508 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.077 00:19:47.077 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.077 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.077 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.077 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.077 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.077 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.077 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.077 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.077 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.077 { 00:19:47.077 "cntlid": 33, 00:19:47.077 "qid": 0, 00:19:47.077 "state": "enabled", 00:19:47.077 "thread": "nvmf_tgt_poll_group_000", 00:19:47.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:47.077 "listen_address": { 00:19:47.077 "trtype": "TCP", 00:19:47.077 "adrfam": "IPv4", 00:19:47.077 "traddr": "10.0.0.2", 00:19:47.077 "trsvcid": "4420" 00:19:47.077 }, 00:19:47.077 "peer_address": { 00:19:47.077 "trtype": "TCP", 00:19:47.077 "adrfam": "IPv4", 00:19:47.077 "traddr": "10.0.0.1", 00:19:47.077 "trsvcid": "46708" 00:19:47.077 }, 00:19:47.077 "auth": { 00:19:47.077 "state": "completed", 00:19:47.077 "digest": "sha256", 00:19:47.077 "dhgroup": "ffdhe6144" 00:19:47.077 } 00:19:47.077 } 00:19:47.077 ]' 00:19:47.077 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.077 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.077 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.077 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:47.077 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.337 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.337 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.337 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.338 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret 
DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:19:47.338 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:19:47.911 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.911 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:47.911 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.911 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.911 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.171 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.171 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.171 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.171 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:48.171 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.171 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.171 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:48.171 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:48.171 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.171 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.171 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.171 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.171 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.171 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.171 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.171 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.740 00:19:48.740 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.740 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.740 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.740 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.740 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.740 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.740 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.740 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.740 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.740 { 00:19:48.740 "cntlid": 35, 00:19:48.740 "qid": 0, 00:19:48.740 "state": "enabled", 00:19:48.740 "thread": "nvmf_tgt_poll_group_000", 00:19:48.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:48.740 "listen_address": { 00:19:48.740 "trtype": "TCP", 00:19:48.740 "adrfam": "IPv4", 00:19:48.740 "traddr": "10.0.0.2", 00:19:48.740 "trsvcid": "4420" 00:19:48.741 }, 00:19:48.741 "peer_address": { 00:19:48.741 "trtype": "TCP", 00:19:48.741 "adrfam": "IPv4", 00:19:48.741 "traddr": "10.0.0.1", 00:19:48.741 "trsvcid": "46744" 00:19:48.741 }, 00:19:48.741 "auth": { 00:19:48.741 "state": "completed", 00:19:48.741 "digest": "sha256", 00:19:48.741 "dhgroup": "ffdhe6144" 00:19:48.741 } 00:19:48.741 } 00:19:48.741 ]' 00:19:48.741 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.741 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.741 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.000 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.000 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.000 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.000 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.000 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.000 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:19:49.000 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:19:49.601 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.601 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:49.601 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.601 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.601 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.601 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.601 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.601 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.860 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:49.860 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.860 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.860 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:49.860 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:49.860 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.860 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.860 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.860 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.860 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.860 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.860 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.860 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.119 00:19:50.119 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.119 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.119 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.378 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.378 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.378 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.378 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.378 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.378 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.378 { 00:19:50.378 "cntlid": 37, 00:19:50.378 "qid": 0, 00:19:50.378 "state": "enabled", 00:19:50.378 "thread": "nvmf_tgt_poll_group_000", 00:19:50.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:50.378 "listen_address": { 00:19:50.378 "trtype": "TCP", 00:19:50.378 "adrfam": "IPv4", 00:19:50.378 "traddr": "10.0.0.2", 00:19:50.378 "trsvcid": "4420" 00:19:50.378 }, 00:19:50.378 "peer_address": { 00:19:50.378 "trtype": "TCP", 00:19:50.378 "adrfam": "IPv4", 00:19:50.378 "traddr": "10.0.0.1", 00:19:50.378 "trsvcid": "44004" 00:19:50.378 }, 00:19:50.378 "auth": { 00:19:50.378 "state": "completed", 00:19:50.378 "digest": "sha256", 00:19:50.378 "dhgroup": "ffdhe6144" 00:19:50.378 } 00:19:50.378 } 00:19:50.378 ]' 00:19:50.378 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.378 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.378 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.638 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:50.638 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.638 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.638 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:50.638 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.638 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:19:50.638 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:19:51.205 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.205 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:51.205 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.205 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.205 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.205 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.205 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:51.205 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:51.464 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:51.464 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.464 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.464 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:51.464 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:51.464 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.464 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:51.464 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.464 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.464 11:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.464 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:51.464 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.464 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.723 00:19:51.723 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.723 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.723 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.982 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.982 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.982 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.982 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.982 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.982 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.982 { 00:19:51.982 "cntlid": 39, 00:19:51.982 "qid": 0, 00:19:51.982 "state": "enabled", 00:19:51.982 "thread": "nvmf_tgt_poll_group_000", 00:19:51.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:51.982 "listen_address": { 00:19:51.982 "trtype": "TCP", 00:19:51.982 "adrfam": "IPv4", 00:19:51.982 "traddr": "10.0.0.2", 00:19:51.982 "trsvcid": "4420" 00:19:51.982 }, 00:19:51.982 "peer_address": { 00:19:51.982 "trtype": "TCP", 00:19:51.982 "adrfam": "IPv4", 00:19:51.982 "traddr": "10.0.0.1", 00:19:51.982 "trsvcid": "44032" 00:19:51.982 }, 00:19:51.982 "auth": { 00:19:51.982 "state": "completed", 00:19:51.982 "digest": "sha256", 00:19:51.982 "dhgroup": "ffdhe6144" 00:19:51.982 } 00:19:51.982 } 00:19:51.982 ]' 00:19:51.982 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.982 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.982 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.242 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:52.242 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.242 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:52.242 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.242 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.242 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:19:52.242 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:19:52.810 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.810 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:52.810 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.810 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.810 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.810 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.810 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.810 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:52.810 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:53.070 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:53.070 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.070 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.070 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:53.070 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:53.070 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.070 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.070 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
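(For readability, a condensed sketch of the connect/verify/disconnect cycle that the trace above keeps repeating for each --dhchap-digests / --dhchap-dhgroups / key combination. It only restates the rpc.py and nvme-cli calls already visible in this log; the socket path, NQNs and host UUID are the ones used by this run, the target-side rpc_cmd calls are assumed to go to the default SPDK RPC socket, and the DHHC-1 secrets are placeholders for the run-specific keys rather than real values.)

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration from target/auth.sh (sha256 / ffdhe8192 / key0).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

# Restrict the SPDK host stack to a single digest and DH group for this pass.
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Allow the host on the target subsystem with the key pair under test.
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller through the host RPC socket and confirm the qpair authenticated.
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'      # expected: completed
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0

# Repeat with the kernel initiator, passing the DHHC-1 secrets explicitly, then clean up.
# <key0> / <ckey0> stand in for the run-specific secrets shown in the log above.
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
    --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
    --dhchap-secret "DHHC-1:00:<key0>" --dhchap-ctrl-secret "DHHC-1:03:<ckey0>"
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN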
00:19:53.070 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.070 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.070 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.070 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.070 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.636 00:19:53.636 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.636 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.636 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.895 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.895 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.895 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.895 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.895 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.895 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.895 { 00:19:53.895 "cntlid": 41, 00:19:53.895 "qid": 0, 00:19:53.895 "state": "enabled", 00:19:53.895 "thread": "nvmf_tgt_poll_group_000", 00:19:53.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:53.895 "listen_address": { 00:19:53.895 "trtype": "TCP", 00:19:53.895 "adrfam": "IPv4", 00:19:53.895 "traddr": "10.0.0.2", 00:19:53.895 "trsvcid": "4420" 00:19:53.895 }, 00:19:53.895 "peer_address": { 00:19:53.895 "trtype": "TCP", 00:19:53.895 "adrfam": "IPv4", 00:19:53.895 "traddr": "10.0.0.1", 00:19:53.895 "trsvcid": "44060" 00:19:53.895 }, 00:19:53.895 "auth": { 00:19:53.895 "state": "completed", 00:19:53.895 "digest": "sha256", 00:19:53.895 "dhgroup": "ffdhe8192" 00:19:53.895 } 00:19:53.895 } 00:19:53.895 ]' 00:19:53.895 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.895 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.895 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.895 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:53.895 11:14:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.895 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.895 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.895 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.154 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:19:54.154 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:19:54.723 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.723 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:54.723 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.723 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.723 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.723 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.723 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:54.723 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:54.982 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:54.982 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.982 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.982 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:54.982 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:54.982 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.982 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.982 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.982 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.982 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.982 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.982 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.982 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.551 00:19:55.551 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.551 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.551 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.551 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.551 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.551 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.551 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.551 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.551 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.551 { 00:19:55.551 "cntlid": 43, 00:19:55.551 "qid": 0, 00:19:55.551 "state": "enabled", 00:19:55.551 "thread": "nvmf_tgt_poll_group_000", 00:19:55.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:55.551 "listen_address": { 00:19:55.551 "trtype": "TCP", 00:19:55.551 "adrfam": "IPv4", 00:19:55.551 "traddr": "10.0.0.2", 00:19:55.551 "trsvcid": "4420" 00:19:55.551 }, 00:19:55.551 "peer_address": { 00:19:55.551 "trtype": "TCP", 00:19:55.551 "adrfam": "IPv4", 00:19:55.551 "traddr": "10.0.0.1", 00:19:55.551 "trsvcid": "44080" 00:19:55.551 }, 00:19:55.551 "auth": { 00:19:55.551 "state": "completed", 00:19:55.551 "digest": "sha256", 00:19:55.551 "dhgroup": "ffdhe8192" 00:19:55.551 } 00:19:55.551 } 00:19:55.551 ]' 00:19:55.551 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.551 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:55.551 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.551 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:55.551 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.810 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.810 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.810 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.810 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:19:55.810 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:19:56.377 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.377 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:56.377 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.377 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.377 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.377 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.377 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:56.377 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:56.636 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:56.636 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.636 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.636 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:56.636 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:56.636 11:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.636 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.636 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.636 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.636 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.636 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.636 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.636 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.205 00:19:57.205 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.205 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.205 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.464 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.464 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.464 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.464 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.464 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.464 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.464 { 00:19:57.464 "cntlid": 45, 00:19:57.464 "qid": 0, 00:19:57.464 "state": "enabled", 00:19:57.464 "thread": "nvmf_tgt_poll_group_000", 00:19:57.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:57.464 "listen_address": { 00:19:57.464 "trtype": "TCP", 00:19:57.464 "adrfam": "IPv4", 00:19:57.464 "traddr": "10.0.0.2", 00:19:57.464 "trsvcid": "4420" 00:19:57.464 }, 00:19:57.464 "peer_address": { 00:19:57.464 "trtype": "TCP", 00:19:57.464 "adrfam": "IPv4", 00:19:57.464 "traddr": "10.0.0.1", 00:19:57.464 "trsvcid": "44092" 00:19:57.464 }, 00:19:57.464 "auth": { 00:19:57.464 "state": "completed", 00:19:57.464 "digest": "sha256", 00:19:57.464 "dhgroup": "ffdhe8192" 00:19:57.464 } 00:19:57.464 } 00:19:57.464 ]' 00:19:57.464 
11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.464 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.464 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.464 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:57.464 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.464 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.464 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.464 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.723 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:19:57.723 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:19:58.291 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.291 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:58.291 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.291 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.291 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.291 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.291 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.291 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.551 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:58.551 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.551 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.551 11:14:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:58.551 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:58.551 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.551 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:58.551 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.551 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.551 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.551 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:58.551 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.551 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.810 00:19:59.069 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.069 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.069 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.069 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.069 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.069 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.069 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.069 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.069 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.069 { 00:19:59.069 "cntlid": 47, 00:19:59.069 "qid": 0, 00:19:59.069 "state": "enabled", 00:19:59.069 "thread": "nvmf_tgt_poll_group_000", 00:19:59.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:59.069 "listen_address": { 00:19:59.069 "trtype": "TCP", 00:19:59.069 "adrfam": "IPv4", 00:19:59.069 "traddr": "10.0.0.2", 00:19:59.069 "trsvcid": "4420" 00:19:59.069 }, 00:19:59.069 "peer_address": { 00:19:59.069 "trtype": "TCP", 00:19:59.069 "adrfam": "IPv4", 00:19:59.069 "traddr": "10.0.0.1", 00:19:59.069 "trsvcid": "44124" 00:19:59.069 }, 00:19:59.069 "auth": { 00:19:59.069 "state": "completed", 00:19:59.069 
"digest": "sha256", 00:19:59.069 "dhgroup": "ffdhe8192" 00:19:59.069 } 00:19:59.069 } 00:19:59.069 ]' 00:19:59.069 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.328 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.328 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.328 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.328 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.328 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.328 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.328 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.587 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:19:59.587 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:00.155 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.155 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:00.155 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.155 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.155 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.155 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:00.155 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.155 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.155 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:00.155 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:00.155 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:00.155 11:14:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.155 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:00.156 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:00.156 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:00.156 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.156 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.156 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.156 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.156 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.156 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.156 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.156 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.415 00:20:00.415 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.415 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.415 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.674 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.674 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.674 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.674 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.674 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.674 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.674 { 00:20:00.674 "cntlid": 49, 00:20:00.674 "qid": 0, 00:20:00.674 "state": "enabled", 00:20:00.674 "thread": "nvmf_tgt_poll_group_000", 00:20:00.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:00.674 "listen_address": { 00:20:00.674 "trtype": "TCP", 00:20:00.674 "adrfam": "IPv4", 
00:20:00.674 "traddr": "10.0.0.2", 00:20:00.674 "trsvcid": "4420" 00:20:00.674 }, 00:20:00.674 "peer_address": { 00:20:00.674 "trtype": "TCP", 00:20:00.674 "adrfam": "IPv4", 00:20:00.674 "traddr": "10.0.0.1", 00:20:00.674 "trsvcid": "43154" 00:20:00.674 }, 00:20:00.674 "auth": { 00:20:00.674 "state": "completed", 00:20:00.674 "digest": "sha384", 00:20:00.674 "dhgroup": "null" 00:20:00.674 } 00:20:00.674 } 00:20:00.674 ]' 00:20:00.674 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.674 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.674 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.674 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:00.674 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.938 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.938 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.938 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.938 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:00.938 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:01.508 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.508 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:01.508 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.508 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.508 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.508 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.508 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:01.508 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:01.767 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:01.767 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.767 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:01.767 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:01.767 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:01.767 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.767 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.767 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.767 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.767 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.767 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.767 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.767 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.026 00:20:02.026 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.026 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.026 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.286 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.286 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.286 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.286 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.286 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.286 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.286 { 00:20:02.286 "cntlid": 51, 00:20:02.286 "qid": 0, 00:20:02.286 "state": "enabled", 
00:20:02.286 "thread": "nvmf_tgt_poll_group_000", 00:20:02.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:02.286 "listen_address": { 00:20:02.286 "trtype": "TCP", 00:20:02.286 "adrfam": "IPv4", 00:20:02.286 "traddr": "10.0.0.2", 00:20:02.286 "trsvcid": "4420" 00:20:02.286 }, 00:20:02.286 "peer_address": { 00:20:02.286 "trtype": "TCP", 00:20:02.286 "adrfam": "IPv4", 00:20:02.286 "traddr": "10.0.0.1", 00:20:02.286 "trsvcid": "43178" 00:20:02.286 }, 00:20:02.286 "auth": { 00:20:02.286 "state": "completed", 00:20:02.286 "digest": "sha384", 00:20:02.286 "dhgroup": "null" 00:20:02.286 } 00:20:02.286 } 00:20:02.286 ]' 00:20:02.286 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.286 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.286 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.286 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:02.286 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.286 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.286 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.286 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.545 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:20:02.545 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:20:03.114 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.114 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:03.114 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.114 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.115 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.115 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.115 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:03.115 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:03.374 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:03.374 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.374 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:03.374 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:03.374 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:03.374 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.374 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.374 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.374 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.374 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.374 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.374 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.374 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.633 00:20:03.633 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.633 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.633 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.892 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.892 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.892 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.892 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.892 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.892 11:15:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.892 { 00:20:03.892 "cntlid": 53, 00:20:03.893 "qid": 0, 00:20:03.893 "state": "enabled", 00:20:03.893 "thread": "nvmf_tgt_poll_group_000", 00:20:03.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:03.893 "listen_address": { 00:20:03.893 "trtype": "TCP", 00:20:03.893 "adrfam": "IPv4", 00:20:03.893 "traddr": "10.0.0.2", 00:20:03.893 "trsvcid": "4420" 00:20:03.893 }, 00:20:03.893 "peer_address": { 00:20:03.893 "trtype": "TCP", 00:20:03.893 "adrfam": "IPv4", 00:20:03.893 "traddr": "10.0.0.1", 00:20:03.893 "trsvcid": "43206" 00:20:03.893 }, 00:20:03.893 "auth": { 00:20:03.893 "state": "completed", 00:20:03.893 "digest": "sha384", 00:20:03.893 "dhgroup": "null" 00:20:03.893 } 00:20:03.893 } 00:20:03.893 ]' 00:20:03.893 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.893 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.893 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.893 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:03.893 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.893 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.893 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.893 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.153 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:20:04.153 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:20:04.721 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.721 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:04.721 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.721 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.721 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.721 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:04.721 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:04.721 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:04.981 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:04.981 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.981 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:04.981 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:04.981 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:04.981 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.981 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:04.981 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.981 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.981 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.981 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:04.981 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:04.981 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.240 00:20:05.240 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.240 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.240 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.240 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.240 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.240 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.240 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.240 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.240 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.240 { 00:20:05.240 "cntlid": 55, 00:20:05.240 "qid": 0, 00:20:05.240 "state": "enabled", 00:20:05.240 "thread": "nvmf_tgt_poll_group_000", 00:20:05.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:05.240 "listen_address": { 00:20:05.240 "trtype": "TCP", 00:20:05.240 "adrfam": "IPv4", 00:20:05.240 "traddr": "10.0.0.2", 00:20:05.240 "trsvcid": "4420" 00:20:05.240 }, 00:20:05.240 "peer_address": { 00:20:05.240 "trtype": "TCP", 00:20:05.240 "adrfam": "IPv4", 00:20:05.240 "traddr": "10.0.0.1", 00:20:05.240 "trsvcid": "43228" 00:20:05.240 }, 00:20:05.240 "auth": { 00:20:05.240 "state": "completed", 00:20:05.240 "digest": "sha384", 00:20:05.240 "dhgroup": "null" 00:20:05.240 } 00:20:05.240 } 00:20:05.240 ]' 00:20:05.240 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.500 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.500 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.500 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:05.500 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.500 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.500 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.500 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.759 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:05.759 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:06.328 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.328 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:06.328 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.328 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.328 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.328 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.328 11:15:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.328 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:06.328 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:06.328 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:06.328 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.328 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:06.328 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:06.328 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:06.328 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.329 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.329 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.329 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.329 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.329 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.329 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.329 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.588 00:20:06.588 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.588 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.588 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.847 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.847 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.847 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:06.847 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.847 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.847 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.847 { 00:20:06.847 "cntlid": 57, 00:20:06.847 "qid": 0, 00:20:06.847 "state": "enabled", 00:20:06.847 "thread": "nvmf_tgt_poll_group_000", 00:20:06.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:06.847 "listen_address": { 00:20:06.847 "trtype": "TCP", 00:20:06.847 "adrfam": "IPv4", 00:20:06.847 "traddr": "10.0.0.2", 00:20:06.847 "trsvcid": "4420" 00:20:06.847 }, 00:20:06.847 "peer_address": { 00:20:06.847 "trtype": "TCP", 00:20:06.847 "adrfam": "IPv4", 00:20:06.847 "traddr": "10.0.0.1", 00:20:06.847 "trsvcid": "43258" 00:20:06.847 }, 00:20:06.847 "auth": { 00:20:06.847 "state": "completed", 00:20:06.847 "digest": "sha384", 00:20:06.847 "dhgroup": "ffdhe2048" 00:20:06.847 } 00:20:06.847 } 00:20:06.847 ]' 00:20:06.847 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.847 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.847 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.106 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:07.106 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.106 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.106 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.106 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.106 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:07.106 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:07.674 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.674 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:07.674 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.674 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.674 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.674 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.674 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.674 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.933 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:07.934 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.934 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:07.934 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:07.934 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:07.934 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.934 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.934 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.934 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.934 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.934 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.934 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.934 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.193 00:20:08.193 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.193 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.193 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.452 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.452 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.452 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.452 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.452 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.452 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.452 { 00:20:08.452 "cntlid": 59, 00:20:08.452 "qid": 0, 00:20:08.452 "state": "enabled", 00:20:08.452 "thread": "nvmf_tgt_poll_group_000", 00:20:08.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:08.452 "listen_address": { 00:20:08.452 "trtype": "TCP", 00:20:08.452 "adrfam": "IPv4", 00:20:08.452 "traddr": "10.0.0.2", 00:20:08.452 "trsvcid": "4420" 00:20:08.452 }, 00:20:08.452 "peer_address": { 00:20:08.452 "trtype": "TCP", 00:20:08.452 "adrfam": "IPv4", 00:20:08.452 "traddr": "10.0.0.1", 00:20:08.452 "trsvcid": "43276" 00:20:08.452 }, 00:20:08.452 "auth": { 00:20:08.452 "state": "completed", 00:20:08.452 "digest": "sha384", 00:20:08.452 "dhgroup": "ffdhe2048" 00:20:08.452 } 00:20:08.452 } 00:20:08.452 ]' 00:20:08.452 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.452 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.452 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.452 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:08.452 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.452 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.452 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.452 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.712 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:20:08.712 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:20:09.280 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.280 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:09.280 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.280 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.280 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.280 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.280 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.280 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.539 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:09.539 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.539 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:09.539 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:09.539 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:09.539 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.539 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.539 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.539 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.539 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.539 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.539 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.539 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.798 00:20:09.798 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.798 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.798 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.798 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.798 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.798 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.798 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.058 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.058 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.058 { 00:20:10.058 "cntlid": 61, 00:20:10.058 "qid": 0, 00:20:10.058 "state": "enabled", 00:20:10.058 "thread": "nvmf_tgt_poll_group_000", 00:20:10.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:10.058 "listen_address": { 00:20:10.058 "trtype": "TCP", 00:20:10.058 "adrfam": "IPv4", 00:20:10.058 "traddr": "10.0.0.2", 00:20:10.058 "trsvcid": "4420" 00:20:10.058 }, 00:20:10.058 "peer_address": { 00:20:10.058 "trtype": "TCP", 00:20:10.058 "adrfam": "IPv4", 00:20:10.058 "traddr": "10.0.0.1", 00:20:10.058 "trsvcid": "43308" 00:20:10.058 }, 00:20:10.058 "auth": { 00:20:10.058 "state": "completed", 00:20:10.058 "digest": "sha384", 00:20:10.058 "dhgroup": "ffdhe2048" 00:20:10.058 } 00:20:10.058 } 00:20:10.058 ]' 00:20:10.058 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.058 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.058 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.058 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:10.058 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.058 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.058 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.058 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.317 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:20:10.317 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:20:10.886 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.886 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:10.886 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.886 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.886 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.886 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.886 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:10.886 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:11.144 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:11.144 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.144 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:11.144 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:11.144 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:11.144 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.144 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:11.144 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.144 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.144 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.144 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:11.144 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.144 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.404 00:20:11.404 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.404 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.404 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.666 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.666 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.666 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.666 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.666 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.666 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.666 { 00:20:11.666 "cntlid": 63, 00:20:11.666 "qid": 0, 00:20:11.666 "state": "enabled", 00:20:11.666 "thread": "nvmf_tgt_poll_group_000", 00:20:11.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:11.666 "listen_address": { 00:20:11.666 "trtype": "TCP", 00:20:11.666 "adrfam": "IPv4", 00:20:11.666 "traddr": "10.0.0.2", 00:20:11.666 "trsvcid": "4420" 00:20:11.666 }, 00:20:11.666 "peer_address": { 00:20:11.666 "trtype": "TCP", 00:20:11.666 "adrfam": "IPv4", 00:20:11.666 "traddr": "10.0.0.1", 00:20:11.666 "trsvcid": "51172" 00:20:11.666 }, 00:20:11.666 "auth": { 00:20:11.666 "state": "completed", 00:20:11.666 "digest": "sha384", 00:20:11.666 "dhgroup": "ffdhe2048" 00:20:11.666 } 00:20:11.666 } 00:20:11.666 ]' 00:20:11.666 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.666 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.666 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.666 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:11.666 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.667 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.667 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.667 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.974 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:11.974 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:12.616 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:12.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.616 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:12.616 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.616 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.616 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.616 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:12.616 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.616 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:12.616 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:12.616 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:12.616 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.616 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:12.616 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:12.616 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:12.617 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.617 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.617 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.617 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.617 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.617 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.617 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.617 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.875 
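The pattern that repeats throughout this trace condenses to a short host-side sequence. This is a sketch only: the rpc.py path, socket, NQNs, key names and RPC flags are copied from the log above, while the digest/dhgroup/keyid variables stand in for the values the test loops over, and the target-side call is shown on rpc.py's default socket as an assumption (the trace only shows it through the rpc_cmd wrapper).

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
digest=sha384 dhgroup=ffdhe3072 keyid=0   # illustrative; the test iterates over these

# Restrict the host-side NVMe bdev driver to the digest/dhgroup under test.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Authorize the host on the subsystem with this iteration's DH-HMAC-CHAP key pair
# (iterations using key3 omit the controller key, as seen in the trace).
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Attach a controller through the host RPC socket, authenticating with the same keys.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"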
00:20:12.875 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.875 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.875 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.133 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.133 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.133 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.133 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.133 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.133 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.133 { 00:20:13.133 "cntlid": 65, 00:20:13.133 "qid": 0, 00:20:13.133 "state": "enabled", 00:20:13.133 "thread": "nvmf_tgt_poll_group_000", 00:20:13.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:13.133 "listen_address": { 00:20:13.133 "trtype": "TCP", 00:20:13.133 "adrfam": "IPv4", 00:20:13.133 "traddr": "10.0.0.2", 00:20:13.133 "trsvcid": "4420" 00:20:13.134 }, 00:20:13.134 "peer_address": { 00:20:13.134 "trtype": "TCP", 00:20:13.134 "adrfam": "IPv4", 00:20:13.134 "traddr": "10.0.0.1", 00:20:13.134 "trsvcid": "51196" 00:20:13.134 }, 00:20:13.134 "auth": { 00:20:13.134 "state": "completed", 00:20:13.134 "digest": "sha384", 00:20:13.134 "dhgroup": "ffdhe3072" 00:20:13.134 } 00:20:13.134 } 00:20:13.134 ]' 00:20:13.134 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.134 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.134 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.134 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:13.134 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.393 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.393 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.393 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.393 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:13.393 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:13.960 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.960 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:13.960 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.960 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.960 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.960 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.960 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:13.960 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.220 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:14.220 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.220 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.220 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:14.220 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:14.220 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.220 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.220 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.220 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.220 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.220 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.220 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.220 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.480 00:20:14.480 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.480 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.480 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.739 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.739 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.739 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.739 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.739 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.739 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.739 { 00:20:14.739 "cntlid": 67, 00:20:14.739 "qid": 0, 00:20:14.739 "state": "enabled", 00:20:14.739 "thread": "nvmf_tgt_poll_group_000", 00:20:14.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:14.739 "listen_address": { 00:20:14.739 "trtype": "TCP", 00:20:14.739 "adrfam": "IPv4", 00:20:14.739 "traddr": "10.0.0.2", 00:20:14.739 "trsvcid": "4420" 00:20:14.739 }, 00:20:14.739 "peer_address": { 00:20:14.739 "trtype": "TCP", 00:20:14.739 "adrfam": "IPv4", 00:20:14.739 "traddr": "10.0.0.1", 00:20:14.739 "trsvcid": "51220" 00:20:14.739 }, 00:20:14.739 "auth": { 00:20:14.739 "state": "completed", 00:20:14.739 "digest": "sha384", 00:20:14.739 "dhgroup": "ffdhe3072" 00:20:14.739 } 00:20:14.739 } 00:20:14.739 ]' 00:20:14.739 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.739 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.739 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.739 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:14.739 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.998 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.998 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.998 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.998 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret 
DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:20:14.998 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:20:15.567 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.567 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:15.567 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.567 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.567 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.567 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.567 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:15.568 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:15.827 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:15.827 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.827 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:15.827 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:15.827 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:15.827 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.827 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.827 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.827 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.827 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.827 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.827 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.827 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.086 00:20:16.086 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.086 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.086 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.345 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.345 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.345 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.345 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.345 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.345 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.345 { 00:20:16.345 "cntlid": 69, 00:20:16.345 "qid": 0, 00:20:16.345 "state": "enabled", 00:20:16.345 "thread": "nvmf_tgt_poll_group_000", 00:20:16.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:16.345 "listen_address": { 00:20:16.345 "trtype": "TCP", 00:20:16.345 "adrfam": "IPv4", 00:20:16.345 "traddr": "10.0.0.2", 00:20:16.345 "trsvcid": "4420" 00:20:16.345 }, 00:20:16.345 "peer_address": { 00:20:16.345 "trtype": "TCP", 00:20:16.345 "adrfam": "IPv4", 00:20:16.345 "traddr": "10.0.0.1", 00:20:16.345 "trsvcid": "51248" 00:20:16.345 }, 00:20:16.345 "auth": { 00:20:16.345 "state": "completed", 00:20:16.345 "digest": "sha384", 00:20:16.345 "dhgroup": "ffdhe3072" 00:20:16.345 } 00:20:16.345 } 00:20:16.345 ]' 00:20:16.345 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.345 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.345 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.345 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:16.345 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.345 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.345 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.345 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:16.603 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:20:16.603 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:20:17.171 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.171 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:17.171 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.171 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.171 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.171 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.171 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:17.171 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:17.430 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:17.430 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.430 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:17.430 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:17.430 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:17.430 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.430 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:17.430 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.430 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.430 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.430 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
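Alongside the SPDK-to-SPDK attach, each pass also exercises the Linux kernel initiator through nvme-cli, passing the DH-HMAC-CHAP secrets on the command line and then removing the host again. A sketch of that leg follows; the address, NQNs and host ID are the ones in the trace, and the DHHC-1 secrets are abbreviated here (the real invocations use the full generated secrets shown above).

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Kernel host connects with its own secret plus the controller secret for bidirectional auth.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
    --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'

# Tear the session down and de-authorize the host before the next digest/dhgroup/key combination.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
"$RPC" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562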
00:20:17.430 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.430 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.689 00:20:17.689 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.689 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.689 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.948 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.948 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.948 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.948 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.948 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.948 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.948 { 00:20:17.948 "cntlid": 71, 00:20:17.948 "qid": 0, 00:20:17.948 "state": "enabled", 00:20:17.948 "thread": "nvmf_tgt_poll_group_000", 00:20:17.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:17.948 "listen_address": { 00:20:17.948 "trtype": "TCP", 00:20:17.948 "adrfam": "IPv4", 00:20:17.948 "traddr": "10.0.0.2", 00:20:17.948 "trsvcid": "4420" 00:20:17.948 }, 00:20:17.948 "peer_address": { 00:20:17.948 "trtype": "TCP", 00:20:17.948 "adrfam": "IPv4", 00:20:17.948 "traddr": "10.0.0.1", 00:20:17.948 "trsvcid": "51280" 00:20:17.948 }, 00:20:17.948 "auth": { 00:20:17.948 "state": "completed", 00:20:17.948 "digest": "sha384", 00:20:17.948 "dhgroup": "ffdhe3072" 00:20:17.948 } 00:20:17.948 } 00:20:17.948 ]' 00:20:17.948 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.949 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.949 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.949 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:17.949 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.949 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.949 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.949 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.208 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:18.208 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:18.777 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.777 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:18.777 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.777 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.777 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.777 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.777 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.777 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:18.777 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:19.036 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:19.036 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.036 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.036 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:19.036 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:19.036 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.036 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.036 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.036 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.036 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
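The checks that follow every attach in this trace reduce to a handful of assertions: the host must report exactly the controller that was created, and the target's queue pair must show the expected digest, DH group and a completed authentication state. A condensed sketch, reusing the jq filters from the log (the expected dhgroup value changes per iteration):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock

# The attach must have produced exactly one controller named nvme0.
[[ $("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Inspect the subsystem's qpairs and verify the negotiated authentication parameters.
qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Detach on the host side before the next round.
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0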
00:20:19.036 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.036 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.036 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.296 00:20:19.296 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.296 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.296 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.556 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.556 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.556 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.556 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.556 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.556 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.556 { 00:20:19.556 "cntlid": 73, 00:20:19.556 "qid": 0, 00:20:19.556 "state": "enabled", 00:20:19.556 "thread": "nvmf_tgt_poll_group_000", 00:20:19.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:19.556 "listen_address": { 00:20:19.556 "trtype": "TCP", 00:20:19.556 "adrfam": "IPv4", 00:20:19.556 "traddr": "10.0.0.2", 00:20:19.556 "trsvcid": "4420" 00:20:19.556 }, 00:20:19.556 "peer_address": { 00:20:19.556 "trtype": "TCP", 00:20:19.556 "adrfam": "IPv4", 00:20:19.556 "traddr": "10.0.0.1", 00:20:19.556 "trsvcid": "51302" 00:20:19.556 }, 00:20:19.556 "auth": { 00:20:19.556 "state": "completed", 00:20:19.556 "digest": "sha384", 00:20:19.556 "dhgroup": "ffdhe4096" 00:20:19.556 } 00:20:19.556 } 00:20:19.556 ]' 00:20:19.556 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.556 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.556 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.556 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:19.556 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.556 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.556 
11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.556 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.815 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:19.815 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:20.385 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.385 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.385 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.385 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.385 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.385 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.385 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:20.385 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:20.644 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:20.644 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.644 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.644 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:20.644 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:20.644 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.644 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.644 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.644 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.644 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.644 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.644 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.644 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.908 00:20:20.908 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.908 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.908 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.167 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.167 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.167 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.167 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.167 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.167 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.167 { 00:20:21.167 "cntlid": 75, 00:20:21.167 "qid": 0, 00:20:21.167 "state": "enabled", 00:20:21.167 "thread": "nvmf_tgt_poll_group_000", 00:20:21.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:21.167 "listen_address": { 00:20:21.167 "trtype": "TCP", 00:20:21.167 "adrfam": "IPv4", 00:20:21.167 "traddr": "10.0.0.2", 00:20:21.167 "trsvcid": "4420" 00:20:21.167 }, 00:20:21.167 "peer_address": { 00:20:21.167 "trtype": "TCP", 00:20:21.167 "adrfam": "IPv4", 00:20:21.167 "traddr": "10.0.0.1", 00:20:21.167 "trsvcid": "51186" 00:20:21.167 }, 00:20:21.167 "auth": { 00:20:21.167 "state": "completed", 00:20:21.167 "digest": "sha384", 00:20:21.167 "dhgroup": "ffdhe4096" 00:20:21.167 } 00:20:21.167 } 00:20:21.167 ]' 00:20:21.167 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.167 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.167 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.167 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:21.167 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.167 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.167 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.167 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.425 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:20:21.425 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:20:21.992 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.992 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:21.992 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.992 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.992 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.992 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.992 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:21.992 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:22.251 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:22.251 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.251 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.251 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:22.251 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:22.251 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.251 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.251 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.251 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.251 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.252 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.252 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.252 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.510 00:20:22.510 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.510 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.510 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.769 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.769 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.769 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.769 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.769 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.769 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.769 { 00:20:22.769 "cntlid": 77, 00:20:22.769 "qid": 0, 00:20:22.769 "state": "enabled", 00:20:22.769 "thread": "nvmf_tgt_poll_group_000", 00:20:22.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:22.769 "listen_address": { 00:20:22.769 "trtype": "TCP", 00:20:22.769 "adrfam": "IPv4", 00:20:22.769 "traddr": "10.0.0.2", 00:20:22.769 "trsvcid": "4420" 00:20:22.769 }, 00:20:22.769 "peer_address": { 00:20:22.769 "trtype": "TCP", 00:20:22.769 "adrfam": "IPv4", 00:20:22.769 "traddr": "10.0.0.1", 00:20:22.769 "trsvcid": "51198" 00:20:22.769 }, 00:20:22.769 "auth": { 00:20:22.769 "state": "completed", 00:20:22.769 "digest": "sha384", 00:20:22.769 "dhgroup": "ffdhe4096" 00:20:22.769 } 00:20:22.769 } 00:20:22.769 ]' 00:20:22.769 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.769 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.769 11:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.769 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:22.769 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.769 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.769 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.769 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.030 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:20:23.030 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:20:23.599 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.599 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:23.599 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.599 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.599 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.599 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.599 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:23.599 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:23.858 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:23.858 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.858 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:23.858 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:23.858 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:23.858 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.858 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:23.858 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.858 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.858 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.858 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:23.858 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.858 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.118 00:20:24.118 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.118 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.118 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.377 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.378 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.378 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.378 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.378 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.378 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.378 { 00:20:24.378 "cntlid": 79, 00:20:24.378 "qid": 0, 00:20:24.378 "state": "enabled", 00:20:24.378 "thread": "nvmf_tgt_poll_group_000", 00:20:24.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:24.378 "listen_address": { 00:20:24.378 "trtype": "TCP", 00:20:24.378 "adrfam": "IPv4", 00:20:24.378 "traddr": "10.0.0.2", 00:20:24.378 "trsvcid": "4420" 00:20:24.378 }, 00:20:24.378 "peer_address": { 00:20:24.378 "trtype": "TCP", 00:20:24.378 "adrfam": "IPv4", 00:20:24.378 "traddr": "10.0.0.1", 00:20:24.378 "trsvcid": "51230" 00:20:24.378 }, 00:20:24.378 "auth": { 00:20:24.378 "state": "completed", 00:20:24.378 "digest": "sha384", 00:20:24.378 "dhgroup": "ffdhe4096" 00:20:24.378 } 00:20:24.378 } 00:20:24.378 ]' 00:20:24.378 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.378 11:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.378 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.378 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:24.378 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.378 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.378 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.378 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.637 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:24.637 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:25.205 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.205 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:25.205 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.205 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.205 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.205 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.205 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.205 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:25.205 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:25.464 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:25.464 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.464 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.464 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:25.464 11:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:25.464 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.464 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.464 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.464 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.464 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.464 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.464 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.464 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.724 00:20:25.724 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.724 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.724 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.983 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.983 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.983 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.983 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.983 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.983 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.983 { 00:20:25.983 "cntlid": 81, 00:20:25.983 "qid": 0, 00:20:25.983 "state": "enabled", 00:20:25.983 "thread": "nvmf_tgt_poll_group_000", 00:20:25.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:25.983 "listen_address": { 00:20:25.983 "trtype": "TCP", 00:20:25.983 "adrfam": "IPv4", 00:20:25.983 "traddr": "10.0.0.2", 00:20:25.983 "trsvcid": "4420" 00:20:25.983 }, 00:20:25.983 "peer_address": { 00:20:25.983 "trtype": "TCP", 00:20:25.983 "adrfam": "IPv4", 00:20:25.983 "traddr": "10.0.0.1", 00:20:25.983 "trsvcid": "51266" 00:20:25.983 }, 00:20:25.983 "auth": { 00:20:25.983 "state": "completed", 00:20:25.983 "digest": 
"sha384", 00:20:25.983 "dhgroup": "ffdhe6144" 00:20:25.983 } 00:20:25.983 } 00:20:25.983 ]' 00:20:25.983 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.983 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.983 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.983 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:25.983 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.983 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.983 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.983 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.242 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:26.242 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:26.811 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.811 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:26.811 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.811 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.811 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.811 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.811 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:26.811 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:27.070 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:27.070 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.070 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.070 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:27.070 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:27.070 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.070 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.070 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.070 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.070 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.070 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.070 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.070 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.330 00:20:27.330 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.330 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.330 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.589 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.589 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.589 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.589 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.589 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.589 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.589 { 00:20:27.589 "cntlid": 83, 00:20:27.589 "qid": 0, 00:20:27.589 "state": "enabled", 00:20:27.589 "thread": "nvmf_tgt_poll_group_000", 00:20:27.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:27.589 "listen_address": { 00:20:27.589 "trtype": "TCP", 00:20:27.589 "adrfam": "IPv4", 00:20:27.589 "traddr": "10.0.0.2", 00:20:27.589 
"trsvcid": "4420" 00:20:27.589 }, 00:20:27.589 "peer_address": { 00:20:27.589 "trtype": "TCP", 00:20:27.589 "adrfam": "IPv4", 00:20:27.589 "traddr": "10.0.0.1", 00:20:27.589 "trsvcid": "51276" 00:20:27.589 }, 00:20:27.589 "auth": { 00:20:27.589 "state": "completed", 00:20:27.589 "digest": "sha384", 00:20:27.589 "dhgroup": "ffdhe6144" 00:20:27.589 } 00:20:27.589 } 00:20:27.589 ]' 00:20:27.589 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.589 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.589 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.847 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:27.847 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.847 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.847 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.847 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.847 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:20:27.847 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:20:28.414 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.414 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:28.414 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.414 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.414 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.414 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.414 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.414 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.673 
11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:28.673 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.673 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.673 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:28.673 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:28.673 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.673 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.673 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.673 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.673 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.673 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.673 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.673 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.241 00:20:29.241 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.241 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.241 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.242 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.242 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.242 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.242 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.242 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.242 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.242 { 00:20:29.242 "cntlid": 85, 00:20:29.242 "qid": 0, 00:20:29.242 "state": "enabled", 00:20:29.242 "thread": "nvmf_tgt_poll_group_000", 00:20:29.242 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:29.242 "listen_address": { 00:20:29.242 "trtype": "TCP", 00:20:29.242 "adrfam": "IPv4", 00:20:29.242 "traddr": "10.0.0.2", 00:20:29.242 "trsvcid": "4420" 00:20:29.242 }, 00:20:29.242 "peer_address": { 00:20:29.242 "trtype": "TCP", 00:20:29.242 "adrfam": "IPv4", 00:20:29.242 "traddr": "10.0.0.1", 00:20:29.242 "trsvcid": "51304" 00:20:29.242 }, 00:20:29.242 "auth": { 00:20:29.242 "state": "completed", 00:20:29.242 "digest": "sha384", 00:20:29.242 "dhgroup": "ffdhe6144" 00:20:29.242 } 00:20:29.242 } 00:20:29.242 ]' 00:20:29.242 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.242 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.242 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.500 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.500 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.500 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.501 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.501 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.501 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:20:29.501 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:20:30.068 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.068 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:30.068 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.068 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.068 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.069 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.069 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:30.069 11:15:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:30.328 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:30.328 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.328 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.328 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:30.328 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:30.328 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.328 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:30.328 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.328 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.328 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.328 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:30.328 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.328 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.896 00:20:30.896 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.896 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.896 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.896 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.896 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.896 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.896 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.896 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.896 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.896 { 00:20:30.896 "cntlid": 87, 
00:20:30.896 "qid": 0, 00:20:30.896 "state": "enabled", 00:20:30.896 "thread": "nvmf_tgt_poll_group_000", 00:20:30.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:30.896 "listen_address": { 00:20:30.896 "trtype": "TCP", 00:20:30.896 "adrfam": "IPv4", 00:20:30.896 "traddr": "10.0.0.2", 00:20:30.896 "trsvcid": "4420" 00:20:30.896 }, 00:20:30.896 "peer_address": { 00:20:30.896 "trtype": "TCP", 00:20:30.896 "adrfam": "IPv4", 00:20:30.896 "traddr": "10.0.0.1", 00:20:30.896 "trsvcid": "48854" 00:20:30.896 }, 00:20:30.896 "auth": { 00:20:30.896 "state": "completed", 00:20:30.896 "digest": "sha384", 00:20:30.896 "dhgroup": "ffdhe6144" 00:20:30.896 } 00:20:30.896 } 00:20:30.896 ]' 00:20:30.896 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.896 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.896 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.154 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:31.154 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.154 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.154 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.154 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.154 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:31.154 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:31.722 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.722 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:31.722 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.722 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.722 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.722 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.722 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.722 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:31.722 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:31.981 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:31.981 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.981 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.981 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:31.981 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:31.981 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.981 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.981 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.981 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.981 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.981 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.981 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.981 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.550 00:20:32.550 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.550 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.550 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.809 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.809 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.809 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.809 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.809 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.809 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.809 { 00:20:32.809 "cntlid": 89, 00:20:32.809 "qid": 0, 00:20:32.809 "state": "enabled", 00:20:32.809 "thread": "nvmf_tgt_poll_group_000", 00:20:32.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:32.809 "listen_address": { 00:20:32.809 "trtype": "TCP", 00:20:32.809 "adrfam": "IPv4", 00:20:32.809 "traddr": "10.0.0.2", 00:20:32.809 "trsvcid": "4420" 00:20:32.809 }, 00:20:32.809 "peer_address": { 00:20:32.809 "trtype": "TCP", 00:20:32.809 "adrfam": "IPv4", 00:20:32.809 "traddr": "10.0.0.1", 00:20:32.809 "trsvcid": "48898" 00:20:32.809 }, 00:20:32.809 "auth": { 00:20:32.809 "state": "completed", 00:20:32.809 "digest": "sha384", 00:20:32.809 "dhgroup": "ffdhe8192" 00:20:32.809 } 00:20:32.809 } 00:20:32.809 ]' 00:20:32.809 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.809 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.809 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.809 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:32.809 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.809 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.809 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.809 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.069 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:33.069 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:33.638 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.639 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:33.639 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.639 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.639 11:15:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.639 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.639 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:33.639 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:33.898 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:33.898 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.898 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.898 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:33.898 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:33.898 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.898 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.898 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.898 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.898 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.898 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.898 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.898 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.467 00:20:34.467 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.467 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.467 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.467 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.467 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:34.467 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.467 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.467 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.467 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.467 { 00:20:34.467 "cntlid": 91, 00:20:34.467 "qid": 0, 00:20:34.467 "state": "enabled", 00:20:34.467 "thread": "nvmf_tgt_poll_group_000", 00:20:34.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:34.467 "listen_address": { 00:20:34.467 "trtype": "TCP", 00:20:34.467 "adrfam": "IPv4", 00:20:34.467 "traddr": "10.0.0.2", 00:20:34.467 "trsvcid": "4420" 00:20:34.467 }, 00:20:34.467 "peer_address": { 00:20:34.467 "trtype": "TCP", 00:20:34.467 "adrfam": "IPv4", 00:20:34.467 "traddr": "10.0.0.1", 00:20:34.467 "trsvcid": "48932" 00:20:34.467 }, 00:20:34.467 "auth": { 00:20:34.467 "state": "completed", 00:20:34.467 "digest": "sha384", 00:20:34.467 "dhgroup": "ffdhe8192" 00:20:34.467 } 00:20:34.467 } 00:20:34.467 ]' 00:20:34.467 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.467 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.467 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.727 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:34.727 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.727 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.727 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.727 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.987 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:20:34.987 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:20:35.555 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.555 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:35.555 11:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.555 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.555 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.555 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.555 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:35.555 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:35.555 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:35.555 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.555 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.555 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:35.555 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:35.556 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.556 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.556 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.556 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.556 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.556 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.556 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.556 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.123 00:20:36.123 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.123 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.123 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.382 11:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.383 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.383 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.383 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.383 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.383 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.383 { 00:20:36.383 "cntlid": 93, 00:20:36.383 "qid": 0, 00:20:36.383 "state": "enabled", 00:20:36.383 "thread": "nvmf_tgt_poll_group_000", 00:20:36.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:36.383 "listen_address": { 00:20:36.383 "trtype": "TCP", 00:20:36.383 "adrfam": "IPv4", 00:20:36.383 "traddr": "10.0.0.2", 00:20:36.383 "trsvcid": "4420" 00:20:36.383 }, 00:20:36.383 "peer_address": { 00:20:36.383 "trtype": "TCP", 00:20:36.383 "adrfam": "IPv4", 00:20:36.383 "traddr": "10.0.0.1", 00:20:36.383 "trsvcid": "48974" 00:20:36.383 }, 00:20:36.383 "auth": { 00:20:36.383 "state": "completed", 00:20:36.383 "digest": "sha384", 00:20:36.383 "dhgroup": "ffdhe8192" 00:20:36.383 } 00:20:36.383 } 00:20:36.383 ]' 00:20:36.383 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.383 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.383 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.383 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:36.383 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.383 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.383 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.383 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.642 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:20:36.642 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:20:37.210 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.210 11:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:37.210 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.210 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.210 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.210 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.210 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:37.210 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:37.470 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:37.470 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.470 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:37.470 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:37.470 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:37.470 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.470 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:37.470 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.470 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.470 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.470 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:37.470 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.470 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.039 00:20:38.039 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.039 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.039 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.039 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.039 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.039 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.039 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.039 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.039 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.039 { 00:20:38.039 "cntlid": 95, 00:20:38.039 "qid": 0, 00:20:38.039 "state": "enabled", 00:20:38.039 "thread": "nvmf_tgt_poll_group_000", 00:20:38.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:38.039 "listen_address": { 00:20:38.039 "trtype": "TCP", 00:20:38.039 "adrfam": "IPv4", 00:20:38.039 "traddr": "10.0.0.2", 00:20:38.039 "trsvcid": "4420" 00:20:38.039 }, 00:20:38.039 "peer_address": { 00:20:38.039 "trtype": "TCP", 00:20:38.039 "adrfam": "IPv4", 00:20:38.039 "traddr": "10.0.0.1", 00:20:38.039 "trsvcid": "48984" 00:20:38.039 }, 00:20:38.039 "auth": { 00:20:38.039 "state": "completed", 00:20:38.039 "digest": "sha384", 00:20:38.039 "dhgroup": "ffdhe8192" 00:20:38.039 } 00:20:38.039 } 00:20:38.039 ]' 00:20:38.039 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.039 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.039 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.298 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:38.298 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.298 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.298 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.298 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.298 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:38.298 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:38.866 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.866 11:15:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:38.866 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.866 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.866 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.866 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:38.866 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.866 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.866 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:38.866 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:39.125 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:39.125 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.125 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:39.125 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:39.125 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:39.125 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.125 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.125 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.125 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.125 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.125 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.125 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.125 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.384 00:20:39.384 
11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.384 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.384 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.643 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.643 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.643 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.643 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.643 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.643 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.643 { 00:20:39.643 "cntlid": 97, 00:20:39.643 "qid": 0, 00:20:39.643 "state": "enabled", 00:20:39.643 "thread": "nvmf_tgt_poll_group_000", 00:20:39.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:39.643 "listen_address": { 00:20:39.643 "trtype": "TCP", 00:20:39.643 "adrfam": "IPv4", 00:20:39.643 "traddr": "10.0.0.2", 00:20:39.643 "trsvcid": "4420" 00:20:39.643 }, 00:20:39.643 "peer_address": { 00:20:39.643 "trtype": "TCP", 00:20:39.643 "adrfam": "IPv4", 00:20:39.643 "traddr": "10.0.0.1", 00:20:39.643 "trsvcid": "49004" 00:20:39.643 }, 00:20:39.643 "auth": { 00:20:39.643 "state": "completed", 00:20:39.643 "digest": "sha512", 00:20:39.643 "dhgroup": "null" 00:20:39.643 } 00:20:39.643 } 00:20:39.643 ]' 00:20:39.643 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.643 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.643 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.643 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:39.643 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.643 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.643 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.643 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.901 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:39.901 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:40.465 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.465 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:40.465 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.465 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.465 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.465 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.465 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:40.466 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:40.725 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:40.725 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.725 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:40.725 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:40.725 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:40.725 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.725 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.725 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.725 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.725 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.725 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.725 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.725 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.996 00:20:40.996 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.996 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.996 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.260 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.260 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.260 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.260 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.260 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.260 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.260 { 00:20:41.260 "cntlid": 99, 00:20:41.260 "qid": 0, 00:20:41.260 "state": "enabled", 00:20:41.260 "thread": "nvmf_tgt_poll_group_000", 00:20:41.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:41.260 "listen_address": { 00:20:41.260 "trtype": "TCP", 00:20:41.260 "adrfam": "IPv4", 00:20:41.260 "traddr": "10.0.0.2", 00:20:41.260 "trsvcid": "4420" 00:20:41.260 }, 00:20:41.260 "peer_address": { 00:20:41.260 "trtype": "TCP", 00:20:41.260 "adrfam": "IPv4", 00:20:41.260 "traddr": "10.0.0.1", 00:20:41.260 "trsvcid": "38862" 00:20:41.260 }, 00:20:41.260 "auth": { 00:20:41.260 "state": "completed", 00:20:41.260 "digest": "sha512", 00:20:41.260 "dhgroup": "null" 00:20:41.260 } 00:20:41.260 } 00:20:41.260 ]' 00:20:41.260 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.260 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.260 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.260 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:41.260 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.260 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.260 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.260 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.519 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:20:41.519 11:15:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:20:42.087 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.087 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:42.087 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.087 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.087 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.087 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.087 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:42.087 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:42.346 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:42.346 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.346 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:42.346 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:42.346 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:42.346 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.346 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.346 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.346 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.346 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.346 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.346 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:42.346 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.605 00:20:42.605 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.605 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.605 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.864 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.864 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.864 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.864 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.864 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.864 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.864 { 00:20:42.864 "cntlid": 101, 00:20:42.864 "qid": 0, 00:20:42.864 "state": "enabled", 00:20:42.864 "thread": "nvmf_tgt_poll_group_000", 00:20:42.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:42.864 "listen_address": { 00:20:42.864 "trtype": "TCP", 00:20:42.864 "adrfam": "IPv4", 00:20:42.864 "traddr": "10.0.0.2", 00:20:42.864 "trsvcid": "4420" 00:20:42.864 }, 00:20:42.864 "peer_address": { 00:20:42.864 "trtype": "TCP", 00:20:42.864 "adrfam": "IPv4", 00:20:42.864 "traddr": "10.0.0.1", 00:20:42.864 "trsvcid": "38892" 00:20:42.864 }, 00:20:42.864 "auth": { 00:20:42.864 "state": "completed", 00:20:42.864 "digest": "sha512", 00:20:42.864 "dhgroup": "null" 00:20:42.864 } 00:20:42.864 } 00:20:42.864 ]' 00:20:42.864 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.864 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.864 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.864 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:42.864 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.864 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.864 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.864 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.123 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:20:43.123 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:20:43.692 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.692 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:43.692 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.692 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.692 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.692 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.692 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:43.692 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:43.951 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:43.951 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.951 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:43.951 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:43.951 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:43.951 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.951 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:43.951 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.951 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.951 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.951 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:43.951 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:43.951 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:43.951 00:20:44.210 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.210 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.210 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.210 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.210 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.210 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.210 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.210 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.210 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.210 { 00:20:44.210 "cntlid": 103, 00:20:44.210 "qid": 0, 00:20:44.210 "state": "enabled", 00:20:44.210 "thread": "nvmf_tgt_poll_group_000", 00:20:44.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:44.210 "listen_address": { 00:20:44.210 "trtype": "TCP", 00:20:44.210 "adrfam": "IPv4", 00:20:44.210 "traddr": "10.0.0.2", 00:20:44.210 "trsvcid": "4420" 00:20:44.210 }, 00:20:44.210 "peer_address": { 00:20:44.210 "trtype": "TCP", 00:20:44.210 "adrfam": "IPv4", 00:20:44.210 "traddr": "10.0.0.1", 00:20:44.210 "trsvcid": "38924" 00:20:44.210 }, 00:20:44.210 "auth": { 00:20:44.210 "state": "completed", 00:20:44.210 "digest": "sha512", 00:20:44.210 "dhgroup": "null" 00:20:44.210 } 00:20:44.210 } 00:20:44.210 ]' 00:20:44.210 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.210 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.210 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.469 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:44.469 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.469 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.469 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.469 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.728 11:15:42 
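Alongside the RPC-driven bdev connect, each iteration also validates the kernel initiator path with nvme-cli, which is what the trace does next. A hedged sketch of that leg follows; the DHHC-1 secret is a placeholder in the same format as the log, not the real key material generated by the test.

  # Connect via the kernel NVMe/TCP initiator, authenticating with DH-HMAC-CHAP.
  # Iterations that also configure a controller key additionally pass --dhchap-ctrl-secret.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
      --dhchap-secret 'DHHC-1:03:<host key material placeholder>:'

  # Expect "disconnected 1 controller(s)" once the authenticated connection is torn down.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0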
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:44.728 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:45.296 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.296 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:45.296 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.296 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.296 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.296 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.296 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.296 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:45.296 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:45.296 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:45.297 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.297 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:45.297 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:45.297 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:45.297 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.297 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.297 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.297 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.297 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.297 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:45.297 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.297 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.556 00:20:45.556 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.556 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.556 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.815 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.815 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.815 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.815 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.815 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.815 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.815 { 00:20:45.815 "cntlid": 105, 00:20:45.815 "qid": 0, 00:20:45.815 "state": "enabled", 00:20:45.815 "thread": "nvmf_tgt_poll_group_000", 00:20:45.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:45.815 "listen_address": { 00:20:45.815 "trtype": "TCP", 00:20:45.815 "adrfam": "IPv4", 00:20:45.815 "traddr": "10.0.0.2", 00:20:45.815 "trsvcid": "4420" 00:20:45.815 }, 00:20:45.815 "peer_address": { 00:20:45.815 "trtype": "TCP", 00:20:45.815 "adrfam": "IPv4", 00:20:45.815 "traddr": "10.0.0.1", 00:20:45.815 "trsvcid": "38966" 00:20:45.815 }, 00:20:45.815 "auth": { 00:20:45.815 "state": "completed", 00:20:45.815 "digest": "sha512", 00:20:45.815 "dhgroup": "ffdhe2048" 00:20:45.815 } 00:20:45.815 } 00:20:45.815 ]' 00:20:45.815 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.815 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.815 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.815 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:45.815 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.075 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.075 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.075 11:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.075 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:46.075 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:46.643 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.643 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:46.643 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.643 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.643 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.643 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.643 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:46.643 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:46.903 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:46.903 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.903 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:46.903 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:46.903 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:46.903 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.903 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.903 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.903 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:46.903 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.903 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.903 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.903 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.162 00:20:47.162 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.162 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.162 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.422 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.422 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.422 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.422 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.422 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.422 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.422 { 00:20:47.422 "cntlid": 107, 00:20:47.422 "qid": 0, 00:20:47.422 "state": "enabled", 00:20:47.422 "thread": "nvmf_tgt_poll_group_000", 00:20:47.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:47.422 "listen_address": { 00:20:47.422 "trtype": "TCP", 00:20:47.422 "adrfam": "IPv4", 00:20:47.422 "traddr": "10.0.0.2", 00:20:47.422 "trsvcid": "4420" 00:20:47.422 }, 00:20:47.422 "peer_address": { 00:20:47.422 "trtype": "TCP", 00:20:47.422 "adrfam": "IPv4", 00:20:47.422 "traddr": "10.0.0.1", 00:20:47.422 "trsvcid": "38980" 00:20:47.422 }, 00:20:47.422 "auth": { 00:20:47.422 "state": "completed", 00:20:47.422 "digest": "sha512", 00:20:47.422 "dhgroup": "ffdhe2048" 00:20:47.422 } 00:20:47.422 } 00:20:47.422 ]' 00:20:47.422 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.422 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.422 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.422 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:47.422 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:20:47.422 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.422 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.422 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.681 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:20:47.681 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:20:48.249 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.249 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:48.249 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.249 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.249 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.249 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.249 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:48.249 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:48.509 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:48.509 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.509 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:48.509 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:48.509 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:48.509 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.509 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:48.509 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.509 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.509 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.509 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.509 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.509 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.768 00:20:48.768 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.768 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.768 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.028 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.028 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.028 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.028 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.028 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.028 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.028 { 00:20:49.028 "cntlid": 109, 00:20:49.028 "qid": 0, 00:20:49.028 "state": "enabled", 00:20:49.028 "thread": "nvmf_tgt_poll_group_000", 00:20:49.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:49.028 "listen_address": { 00:20:49.028 "trtype": "TCP", 00:20:49.028 "adrfam": "IPv4", 00:20:49.028 "traddr": "10.0.0.2", 00:20:49.028 "trsvcid": "4420" 00:20:49.028 }, 00:20:49.028 "peer_address": { 00:20:49.028 "trtype": "TCP", 00:20:49.028 "adrfam": "IPv4", 00:20:49.028 "traddr": "10.0.0.1", 00:20:49.028 "trsvcid": "39000" 00:20:49.028 }, 00:20:49.028 "auth": { 00:20:49.028 "state": "completed", 00:20:49.028 "digest": "sha512", 00:20:49.028 "dhgroup": "ffdhe2048" 00:20:49.028 } 00:20:49.028 } 00:20:49.028 ]' 00:20:49.028 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.028 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.028 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.028 11:15:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:49.028 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.028 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.028 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.028 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.287 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:20:49.287 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:20:49.936 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.936 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:49.936 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.936 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.936 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.936 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.936 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:49.936 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:49.936 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:49.936 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.937 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:49.937 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:49.937 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:49.937 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.937 11:15:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:49.937 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.937 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.937 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.937 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:49.937 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.937 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.196 00:20:50.196 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.196 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.196 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.456 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.456 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.456 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.456 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.456 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.456 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.456 { 00:20:50.456 "cntlid": 111, 00:20:50.456 "qid": 0, 00:20:50.456 "state": "enabled", 00:20:50.456 "thread": "nvmf_tgt_poll_group_000", 00:20:50.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:50.456 "listen_address": { 00:20:50.456 "trtype": "TCP", 00:20:50.456 "adrfam": "IPv4", 00:20:50.456 "traddr": "10.0.0.2", 00:20:50.456 "trsvcid": "4420" 00:20:50.456 }, 00:20:50.456 "peer_address": { 00:20:50.456 "trtype": "TCP", 00:20:50.456 "adrfam": "IPv4", 00:20:50.456 "traddr": "10.0.0.1", 00:20:50.456 "trsvcid": "49606" 00:20:50.456 }, 00:20:50.456 "auth": { 00:20:50.456 "state": "completed", 00:20:50.456 "digest": "sha512", 00:20:50.456 "dhgroup": "ffdhe2048" 00:20:50.456 } 00:20:50.456 } 00:20:50.456 ]' 00:20:50.456 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.456 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.456 
11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.456 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:50.456 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.715 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.715 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.715 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.715 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:50.715 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:51.285 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.285 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:51.285 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.285 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.285 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.285 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.285 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.285 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:51.285 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:51.544 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:51.544 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.544 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:51.544 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:51.544 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:51.544 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.544 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.544 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.544 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.544 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.544 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.544 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.544 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.804 00:20:51.804 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.804 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.804 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.063 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.063 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.063 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.063 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.064 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.064 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.064 { 00:20:52.064 "cntlid": 113, 00:20:52.064 "qid": 0, 00:20:52.064 "state": "enabled", 00:20:52.064 "thread": "nvmf_tgt_poll_group_000", 00:20:52.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:52.064 "listen_address": { 00:20:52.064 "trtype": "TCP", 00:20:52.064 "adrfam": "IPv4", 00:20:52.064 "traddr": "10.0.0.2", 00:20:52.064 "trsvcid": "4420" 00:20:52.064 }, 00:20:52.064 "peer_address": { 00:20:52.064 "trtype": "TCP", 00:20:52.064 "adrfam": "IPv4", 00:20:52.064 "traddr": "10.0.0.1", 00:20:52.064 "trsvcid": "49632" 00:20:52.064 }, 00:20:52.064 "auth": { 00:20:52.064 "state": "completed", 00:20:52.064 "digest": "sha512", 00:20:52.064 "dhgroup": "ffdhe3072" 00:20:52.064 } 00:20:52.064 } 00:20:52.064 ]' 00:20:52.064 11:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.064 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.064 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.064 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:52.064 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.064 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.064 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.064 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.323 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:52.323 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:52.893 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.893 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:52.893 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.893 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.893 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.893 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.893 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:52.893 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:53.152 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:53.152 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.152 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:20:53.152 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:53.152 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:53.152 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.152 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.152 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.152 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.152 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.152 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.152 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.152 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.412 00:20:53.412 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.412 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.412 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.671 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.671 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.671 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.671 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.671 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.671 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.671 { 00:20:53.671 "cntlid": 115, 00:20:53.671 "qid": 0, 00:20:53.671 "state": "enabled", 00:20:53.671 "thread": "nvmf_tgt_poll_group_000", 00:20:53.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:53.671 "listen_address": { 00:20:53.671 "trtype": "TCP", 00:20:53.671 "adrfam": "IPv4", 00:20:53.671 "traddr": "10.0.0.2", 00:20:53.671 "trsvcid": "4420" 00:20:53.671 }, 00:20:53.671 "peer_address": { 00:20:53.671 "trtype": "TCP", 00:20:53.671 "adrfam": "IPv4", 
00:20:53.671 "traddr": "10.0.0.1", 00:20:53.671 "trsvcid": "49644" 00:20:53.671 }, 00:20:53.671 "auth": { 00:20:53.671 "state": "completed", 00:20:53.671 "digest": "sha512", 00:20:53.671 "dhgroup": "ffdhe3072" 00:20:53.671 } 00:20:53.671 } 00:20:53.671 ]' 00:20:53.671 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.671 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.671 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.671 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:53.671 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.671 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.671 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.671 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.930 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:20:53.930 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:20:54.497 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.497 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:54.497 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.497 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.497 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.497 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.497 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:54.497 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:54.755 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:20:54.755 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.755 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:54.755 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:54.755 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:54.755 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.755 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.755 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.755 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.755 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.755 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.755 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.755 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.013 00:20:55.013 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.013 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.013 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.272 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.272 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.272 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.272 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.272 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.272 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.272 { 00:20:55.272 "cntlid": 117, 00:20:55.272 "qid": 0, 00:20:55.272 "state": "enabled", 00:20:55.272 "thread": "nvmf_tgt_poll_group_000", 00:20:55.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:55.272 "listen_address": { 00:20:55.272 "trtype": "TCP", 
00:20:55.272 "adrfam": "IPv4", 00:20:55.272 "traddr": "10.0.0.2", 00:20:55.272 "trsvcid": "4420" 00:20:55.272 }, 00:20:55.272 "peer_address": { 00:20:55.272 "trtype": "TCP", 00:20:55.272 "adrfam": "IPv4", 00:20:55.272 "traddr": "10.0.0.1", 00:20:55.272 "trsvcid": "49680" 00:20:55.272 }, 00:20:55.272 "auth": { 00:20:55.272 "state": "completed", 00:20:55.272 "digest": "sha512", 00:20:55.272 "dhgroup": "ffdhe3072" 00:20:55.272 } 00:20:55.272 } 00:20:55.272 ]' 00:20:55.272 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.272 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.272 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.272 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:55.272 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.272 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.272 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.272 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.531 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:20:55.531 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:20:56.097 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.097 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:56.097 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.097 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.097 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.097 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.097 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:56.097 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:56.356 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:56.356 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.356 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:56.356 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:56.356 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:56.356 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.356 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:56.356 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.356 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.356 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.356 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:56.356 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:56.356 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:56.615 00:20:56.615 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.615 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.615 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.615 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.615 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.615 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.615 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.615 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.615 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.615 { 00:20:56.615 "cntlid": 119, 00:20:56.615 "qid": 0, 00:20:56.615 "state": "enabled", 00:20:56.615 "thread": "nvmf_tgt_poll_group_000", 00:20:56.615 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:56.615 "listen_address": { 00:20:56.615 "trtype": "TCP", 00:20:56.615 "adrfam": "IPv4", 00:20:56.615 "traddr": "10.0.0.2", 00:20:56.615 "trsvcid": "4420" 00:20:56.615 }, 00:20:56.615 "peer_address": { 00:20:56.615 "trtype": "TCP", 00:20:56.615 "adrfam": "IPv4", 00:20:56.615 "traddr": "10.0.0.1", 00:20:56.615 "trsvcid": "49718" 00:20:56.615 }, 00:20:56.615 "auth": { 00:20:56.615 "state": "completed", 00:20:56.615 "digest": "sha512", 00:20:56.615 "dhgroup": "ffdhe3072" 00:20:56.615 } 00:20:56.615 } 00:20:56.615 ]' 00:20:56.615 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.873 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.873 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.873 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:56.873 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.873 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.873 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.873 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.132 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:57.132 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:20:57.702 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.703 11:15:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.703 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.962 00:20:58.222 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.222 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.222 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.222 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.222 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.222 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.222 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.222 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.222 11:15:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.222 { 00:20:58.222 "cntlid": 121, 00:20:58.222 "qid": 0, 00:20:58.222 "state": "enabled", 00:20:58.222 "thread": "nvmf_tgt_poll_group_000", 00:20:58.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:58.222 "listen_address": { 00:20:58.222 "trtype": "TCP", 00:20:58.222 "adrfam": "IPv4", 00:20:58.222 "traddr": "10.0.0.2", 00:20:58.222 "trsvcid": "4420" 00:20:58.222 }, 00:20:58.222 "peer_address": { 00:20:58.222 "trtype": "TCP", 00:20:58.222 "adrfam": "IPv4", 00:20:58.222 "traddr": "10.0.0.1", 00:20:58.222 "trsvcid": "49748" 00:20:58.222 }, 00:20:58.222 "auth": { 00:20:58.222 "state": "completed", 00:20:58.222 "digest": "sha512", 00:20:58.222 "dhgroup": "ffdhe4096" 00:20:58.222 } 00:20:58.222 } 00:20:58.222 ]' 00:20:58.222 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.222 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.222 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.482 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:58.482 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.482 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.482 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.482 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.742 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:58.742 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.311 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.570 00:20:59.570 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.570 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.570 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.829 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.829 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.829 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.829 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.829 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.829 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.829 { 00:20:59.829 "cntlid": 123, 00:20:59.829 "qid": 0, 00:20:59.829 "state": "enabled", 00:20:59.829 "thread": "nvmf_tgt_poll_group_000", 00:20:59.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:59.829 "listen_address": { 00:20:59.829 "trtype": "TCP", 00:20:59.829 "adrfam": "IPv4", 00:20:59.829 "traddr": "10.0.0.2", 00:20:59.829 "trsvcid": "4420" 00:20:59.829 }, 00:20:59.829 "peer_address": { 00:20:59.829 "trtype": "TCP", 00:20:59.829 "adrfam": "IPv4", 00:20:59.829 "traddr": "10.0.0.1", 00:20:59.829 "trsvcid": "49786" 00:20:59.829 }, 00:20:59.829 "auth": { 00:20:59.829 "state": "completed", 00:20:59.829 "digest": "sha512", 00:20:59.829 "dhgroup": "ffdhe4096" 00:20:59.829 } 00:20:59.829 } 00:20:59.829 ]' 00:20:59.829 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.829 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.829 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.088 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:00.088 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.088 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.088 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.088 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.347 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:21:00.347 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.917 11:15:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.917 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.176 00:21:01.176 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.176 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.176 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.434 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.434 11:15:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.434 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.434 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.434 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.435 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.435 { 00:21:01.435 "cntlid": 125, 00:21:01.435 "qid": 0, 00:21:01.435 "state": "enabled", 00:21:01.435 "thread": "nvmf_tgt_poll_group_000", 00:21:01.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:01.435 "listen_address": { 00:21:01.435 "trtype": "TCP", 00:21:01.435 "adrfam": "IPv4", 00:21:01.435 "traddr": "10.0.0.2", 00:21:01.435 "trsvcid": "4420" 00:21:01.435 }, 00:21:01.435 "peer_address": { 00:21:01.435 "trtype": "TCP", 00:21:01.435 "adrfam": "IPv4", 00:21:01.435 "traddr": "10.0.0.1", 00:21:01.435 "trsvcid": "55372" 00:21:01.435 }, 00:21:01.435 "auth": { 00:21:01.435 "state": "completed", 00:21:01.435 "digest": "sha512", 00:21:01.435 "dhgroup": "ffdhe4096" 00:21:01.435 } 00:21:01.435 } 00:21:01.435 ]' 00:21:01.435 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.435 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.435 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.694 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:01.694 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.694 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.694 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.694 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.694 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:21:01.694 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:21:02.262 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.262 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:02.262 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.262 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.262 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.262 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.262 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:02.262 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:02.522 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:02.522 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.522 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:02.522 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:02.522 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:02.522 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.522 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:02.522 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.522 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.522 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.522 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:02.522 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:02.522 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:02.781 00:21:02.781 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.781 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.781 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.040 11:16:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.041 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.041 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.041 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.041 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.041 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.041 { 00:21:03.041 "cntlid": 127, 00:21:03.041 "qid": 0, 00:21:03.041 "state": "enabled", 00:21:03.041 "thread": "nvmf_tgt_poll_group_000", 00:21:03.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:03.041 "listen_address": { 00:21:03.041 "trtype": "TCP", 00:21:03.041 "adrfam": "IPv4", 00:21:03.041 "traddr": "10.0.0.2", 00:21:03.041 "trsvcid": "4420" 00:21:03.041 }, 00:21:03.041 "peer_address": { 00:21:03.041 "trtype": "TCP", 00:21:03.041 "adrfam": "IPv4", 00:21:03.041 "traddr": "10.0.0.1", 00:21:03.041 "trsvcid": "55404" 00:21:03.041 }, 00:21:03.041 "auth": { 00:21:03.041 "state": "completed", 00:21:03.041 "digest": "sha512", 00:21:03.041 "dhgroup": "ffdhe4096" 00:21:03.041 } 00:21:03.041 } 00:21:03.041 ]' 00:21:03.041 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.041 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.041 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.041 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:03.041 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.300 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.300 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.300 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.300 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:21:03.301 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:21:03.869 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.869 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:03.869 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.869 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.869 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.869 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.869 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.869 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.869 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:04.128 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:04.128 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.128 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.128 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:04.128 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:04.128 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.128 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.128 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.128 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.128 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.128 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.129 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.129 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.387 00:21:04.647 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.647 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.647 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.647 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.647 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.647 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.647 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.647 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.647 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.647 { 00:21:04.647 "cntlid": 129, 00:21:04.647 "qid": 0, 00:21:04.647 "state": "enabled", 00:21:04.647 "thread": "nvmf_tgt_poll_group_000", 00:21:04.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:04.647 "listen_address": { 00:21:04.647 "trtype": "TCP", 00:21:04.647 "adrfam": "IPv4", 00:21:04.647 "traddr": "10.0.0.2", 00:21:04.647 "trsvcid": "4420" 00:21:04.647 }, 00:21:04.647 "peer_address": { 00:21:04.647 "trtype": "TCP", 00:21:04.647 "adrfam": "IPv4", 00:21:04.647 "traddr": "10.0.0.1", 00:21:04.647 "trsvcid": "55430" 00:21:04.647 }, 00:21:04.647 "auth": { 00:21:04.647 "state": "completed", 00:21:04.647 "digest": "sha512", 00:21:04.647 "dhgroup": "ffdhe6144" 00:21:04.647 } 00:21:04.647 } 00:21:04.647 ]' 00:21:04.647 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.647 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.647 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.907 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:04.907 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.907 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.907 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.907 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.166 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:21:05.166 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret 
DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.735 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.302 00:21:06.302 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.302 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.302 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.302 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.302 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.302 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.302 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.302 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.302 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.302 { 00:21:06.302 "cntlid": 131, 00:21:06.302 "qid": 0, 00:21:06.302 "state": "enabled", 00:21:06.302 "thread": "nvmf_tgt_poll_group_000", 00:21:06.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:06.302 "listen_address": { 00:21:06.302 "trtype": "TCP", 00:21:06.302 "adrfam": "IPv4", 00:21:06.302 "traddr": "10.0.0.2", 00:21:06.302 "trsvcid": "4420" 00:21:06.302 }, 00:21:06.302 "peer_address": { 00:21:06.302 "trtype": "TCP", 00:21:06.302 "adrfam": "IPv4", 00:21:06.302 "traddr": "10.0.0.1", 00:21:06.302 "trsvcid": "55464" 00:21:06.302 }, 00:21:06.302 "auth": { 00:21:06.302 "state": "completed", 00:21:06.302 "digest": "sha512", 00:21:06.302 "dhgroup": "ffdhe6144" 00:21:06.302 } 00:21:06.302 } 00:21:06.302 ]' 00:21:06.302 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.562 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.562 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.562 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:06.562 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.562 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.562 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.562 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.821 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:21:06.821 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.389 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.957 00:21:07.957 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.957 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.957 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.957 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.957 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.958 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.958 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.958 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.958 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.958 { 00:21:07.958 "cntlid": 133, 00:21:07.958 "qid": 0, 00:21:07.958 "state": "enabled", 00:21:07.958 "thread": "nvmf_tgt_poll_group_000", 00:21:07.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:07.958 "listen_address": { 00:21:07.958 "trtype": "TCP", 00:21:07.958 "adrfam": "IPv4", 00:21:07.958 "traddr": "10.0.0.2", 00:21:07.958 "trsvcid": "4420" 00:21:07.958 }, 00:21:07.958 "peer_address": { 00:21:07.958 "trtype": "TCP", 00:21:07.958 "adrfam": "IPv4", 00:21:07.958 "traddr": "10.0.0.1", 00:21:07.958 "trsvcid": "55478" 00:21:07.958 }, 00:21:07.958 "auth": { 00:21:07.958 "state": "completed", 00:21:07.958 "digest": "sha512", 00:21:07.958 "dhgroup": "ffdhe6144" 00:21:07.958 } 00:21:07.958 } 00:21:07.958 ]' 00:21:07.958 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.217 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.217 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.217 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:08.217 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.217 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.217 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.217 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.477 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret 
DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:21:08.477 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:09.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.616 00:21:09.616 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.616 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.616 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.616 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.616 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.616 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.616 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.616 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.616 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.616 { 00:21:09.616 "cntlid": 135, 00:21:09.616 "qid": 0, 00:21:09.616 "state": "enabled", 00:21:09.616 "thread": "nvmf_tgt_poll_group_000", 00:21:09.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:09.616 "listen_address": { 00:21:09.616 "trtype": "TCP", 00:21:09.616 "adrfam": "IPv4", 00:21:09.616 "traddr": "10.0.0.2", 00:21:09.616 "trsvcid": "4420" 00:21:09.616 }, 00:21:09.616 "peer_address": { 00:21:09.616 "trtype": "TCP", 00:21:09.616 "adrfam": "IPv4", 00:21:09.616 "traddr": "10.0.0.1", 00:21:09.616 "trsvcid": "55512" 00:21:09.616 }, 00:21:09.616 "auth": { 00:21:09.616 "state": "completed", 00:21:09.616 "digest": "sha512", 00:21:09.616 "dhgroup": "ffdhe6144" 00:21:09.616 } 00:21:09.616 } 00:21:09.616 ]' 00:21:09.616 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.876 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.876 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.876 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:09.876 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.876 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.876 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.876 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.135 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:21:10.135 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.705 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.273 00:21:11.273 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.273 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.273 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.532 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.532 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.532 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.532 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.532 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.532 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.532 { 00:21:11.532 "cntlid": 137, 00:21:11.532 "qid": 0, 00:21:11.532 "state": "enabled", 00:21:11.532 "thread": "nvmf_tgt_poll_group_000", 00:21:11.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:11.532 "listen_address": { 00:21:11.532 "trtype": "TCP", 00:21:11.532 "adrfam": "IPv4", 00:21:11.532 "traddr": "10.0.0.2", 00:21:11.532 "trsvcid": "4420" 00:21:11.532 }, 00:21:11.532 "peer_address": { 00:21:11.532 "trtype": "TCP", 00:21:11.532 "adrfam": "IPv4", 00:21:11.532 "traddr": "10.0.0.1", 00:21:11.532 "trsvcid": "46132" 00:21:11.532 }, 00:21:11.532 "auth": { 00:21:11.532 "state": "completed", 00:21:11.532 "digest": "sha512", 00:21:11.532 "dhgroup": "ffdhe8192" 00:21:11.532 } 00:21:11.532 } 00:21:11.532 ]' 00:21:11.532 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.532 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.532 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.532 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:11.532 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.532 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.532 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.532 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.791 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:21:11.791 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:21:12.360 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.360 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:12.360 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.360 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.360 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.360 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.360 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:12.360 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:12.620 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:12.620 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.620 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:12.620 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:12.620 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:12.620 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.620 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.620 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.620 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.620 11:16:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.620 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.620 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.620 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.189 00:21:13.189 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.189 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.189 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.189 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.189 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.189 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.189 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.189 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.189 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.189 { 00:21:13.189 "cntlid": 139, 00:21:13.189 "qid": 0, 00:21:13.189 "state": "enabled", 00:21:13.189 "thread": "nvmf_tgt_poll_group_000", 00:21:13.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:13.189 "listen_address": { 00:21:13.189 "trtype": "TCP", 00:21:13.189 "adrfam": "IPv4", 00:21:13.189 "traddr": "10.0.0.2", 00:21:13.189 "trsvcid": "4420" 00:21:13.189 }, 00:21:13.189 "peer_address": { 00:21:13.189 "trtype": "TCP", 00:21:13.189 "adrfam": "IPv4", 00:21:13.189 "traddr": "10.0.0.1", 00:21:13.189 "trsvcid": "46174" 00:21:13.189 }, 00:21:13.189 "auth": { 00:21:13.189 "state": "completed", 00:21:13.189 "digest": "sha512", 00:21:13.189 "dhgroup": "ffdhe8192" 00:21:13.189 } 00:21:13.189 } 00:21:13.189 ]' 00:21:13.189 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.189 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.189 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.447 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:13.447 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.447 11:16:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.447 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.447 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.447 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:21:13.705 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: --dhchap-ctrl-secret DHHC-1:02:M2NiZjUwZjc1MjQzNDQ0MTI0M2M2ODgyYjNhZGU4MTA4NmRmMjQ5NmRhZDdmMDVmEftGTQ==: 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.271 11:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.271 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.837 00:21:14.837 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.837 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.837 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.096 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.096 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.096 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.096 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.096 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.096 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.096 { 00:21:15.096 "cntlid": 141, 00:21:15.096 "qid": 0, 00:21:15.096 "state": "enabled", 00:21:15.096 "thread": "nvmf_tgt_poll_group_000", 00:21:15.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:15.096 "listen_address": { 00:21:15.096 "trtype": "TCP", 00:21:15.096 "adrfam": "IPv4", 00:21:15.096 "traddr": "10.0.0.2", 00:21:15.096 "trsvcid": "4420" 00:21:15.096 }, 00:21:15.096 "peer_address": { 00:21:15.096 "trtype": "TCP", 00:21:15.096 "adrfam": "IPv4", 00:21:15.096 "traddr": "10.0.0.1", 00:21:15.096 "trsvcid": "46194" 00:21:15.096 }, 00:21:15.096 "auth": { 00:21:15.096 "state": "completed", 00:21:15.096 "digest": "sha512", 00:21:15.096 "dhgroup": "ffdhe8192" 00:21:15.096 } 00:21:15.096 } 00:21:15.096 ]' 00:21:15.096 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.096 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.096 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.096 11:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.096 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.096 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.096 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.096 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.354 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:21:15.355 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:01:ZDIwNTg4MzM0ZWUzMjAzZmJkODlkZGExMGJjZjRiZDiob5MA: 00:21:15.922 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.922 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:15.922 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.922 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.922 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.922 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.922 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:15.922 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:16.181 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:16.181 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.181 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:16.181 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:16.181 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:16.181 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.181 11:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:16.181 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.181 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.181 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.181 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:16.181 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.181 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.749 00:21:16.749 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.749 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.749 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.749 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.749 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.749 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.749 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.749 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.749 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.749 { 00:21:16.749 "cntlid": 143, 00:21:16.749 "qid": 0, 00:21:16.749 "state": "enabled", 00:21:16.749 "thread": "nvmf_tgt_poll_group_000", 00:21:16.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:16.749 "listen_address": { 00:21:16.749 "trtype": "TCP", 00:21:16.749 "adrfam": "IPv4", 00:21:16.749 "traddr": "10.0.0.2", 00:21:16.749 "trsvcid": "4420" 00:21:16.749 }, 00:21:16.749 "peer_address": { 00:21:16.749 "trtype": "TCP", 00:21:16.749 "adrfam": "IPv4", 00:21:16.749 "traddr": "10.0.0.1", 00:21:16.749 "trsvcid": "46220" 00:21:16.749 }, 00:21:16.749 "auth": { 00:21:16.749 "state": "completed", 00:21:16.749 "digest": "sha512", 00:21:16.749 "dhgroup": "ffdhe8192" 00:21:16.749 } 00:21:16.749 } 00:21:16.749 ]' 00:21:16.749 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.009 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.009 
11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.009 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:17.009 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.009 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.009 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.009 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.267 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:21:17.267 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:21:17.835 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.835 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:17.835 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.836 11:16:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.836 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.404 00:21:18.404 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.404 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.404 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.663 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.663 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.663 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.663 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.663 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.663 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.663 { 00:21:18.663 "cntlid": 145, 00:21:18.663 "qid": 0, 00:21:18.663 "state": "enabled", 00:21:18.663 "thread": "nvmf_tgt_poll_group_000", 00:21:18.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:18.664 "listen_address": { 00:21:18.664 "trtype": "TCP", 00:21:18.664 "adrfam": "IPv4", 00:21:18.664 "traddr": "10.0.0.2", 00:21:18.664 "trsvcid": "4420" 00:21:18.664 }, 00:21:18.664 "peer_address": { 00:21:18.664 
"trtype": "TCP", 00:21:18.664 "adrfam": "IPv4", 00:21:18.664 "traddr": "10.0.0.1", 00:21:18.664 "trsvcid": "46240" 00:21:18.664 }, 00:21:18.664 "auth": { 00:21:18.664 "state": "completed", 00:21:18.664 "digest": "sha512", 00:21:18.664 "dhgroup": "ffdhe8192" 00:21:18.664 } 00:21:18.664 } 00:21:18.664 ]' 00:21:18.664 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.664 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.664 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.664 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:18.664 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.664 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.664 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.664 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.922 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:21:18.922 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY1Y2UyYmY4ODU1MTA1ZWU4NmQyOThiMzI2MWU3MGNlZWUxMmMxMTVjMGFhZGMyqe134w==: --dhchap-ctrl-secret DHHC-1:03:N2YyMWVkYjc3ZWUwODgzZWViMGFiMDAyYTFiMThmMGU2ZTM0YmM2MGI4ZWI3M2I1OTI2MWY0M2MzY2FmN2U4NBc9UYE=: 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:19.489 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:20.058 request: 00:21:20.058 { 00:21:20.058 "name": "nvme0", 00:21:20.058 "trtype": "tcp", 00:21:20.058 "traddr": "10.0.0.2", 00:21:20.058 "adrfam": "ipv4", 00:21:20.058 "trsvcid": "4420", 00:21:20.058 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:20.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:20.058 "prchk_reftag": false, 00:21:20.058 "prchk_guard": false, 00:21:20.058 "hdgst": false, 00:21:20.058 "ddgst": false, 00:21:20.058 "dhchap_key": "key2", 00:21:20.058 "allow_unrecognized_csi": false, 00:21:20.058 "method": "bdev_nvme_attach_controller", 00:21:20.058 "req_id": 1 00:21:20.058 } 00:21:20.058 Got JSON-RPC error response 00:21:20.058 response: 00:21:20.058 { 00:21:20.058 "code": -5, 00:21:20.058 "message": "Input/output error" 00:21:20.058 } 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.058 11:16:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:20.058 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:20.318 request: 00:21:20.318 { 00:21:20.318 "name": "nvme0", 00:21:20.318 "trtype": "tcp", 00:21:20.318 "traddr": "10.0.0.2", 00:21:20.318 "adrfam": "ipv4", 00:21:20.318 "trsvcid": "4420", 00:21:20.318 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:20.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:20.318 "prchk_reftag": false, 00:21:20.318 "prchk_guard": false, 00:21:20.318 "hdgst": false, 00:21:20.318 "ddgst": false, 00:21:20.318 "dhchap_key": "key1", 00:21:20.318 "dhchap_ctrlr_key": "ckey2", 00:21:20.318 "allow_unrecognized_csi": false, 00:21:20.318 "method": "bdev_nvme_attach_controller", 00:21:20.318 "req_id": 1 00:21:20.318 } 00:21:20.318 Got JSON-RPC error response 00:21:20.318 response: 00:21:20.318 { 00:21:20.318 "code": -5, 00:21:20.318 "message": "Input/output error" 00:21:20.318 } 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:20.318 11:16:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.318 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.887 request: 00:21:20.887 { 00:21:20.887 "name": "nvme0", 00:21:20.887 "trtype": "tcp", 00:21:20.887 "traddr": "10.0.0.2", 00:21:20.887 "adrfam": "ipv4", 00:21:20.887 "trsvcid": "4420", 00:21:20.887 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:20.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:20.887 "prchk_reftag": false, 00:21:20.887 "prchk_guard": false, 00:21:20.887 "hdgst": false, 00:21:20.887 "ddgst": false, 00:21:20.887 "dhchap_key": "key1", 00:21:20.887 "dhchap_ctrlr_key": "ckey1", 00:21:20.887 "allow_unrecognized_csi": false, 00:21:20.887 "method": "bdev_nvme_attach_controller", 00:21:20.887 "req_id": 1 00:21:20.887 } 00:21:20.887 Got JSON-RPC error response 00:21:20.887 response: 00:21:20.887 { 00:21:20.887 "code": -5, 00:21:20.887 "message": "Input/output error" 00:21:20.887 } 00:21:20.887 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:20.887 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:20.887 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:20.887 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:20.887 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:20.887 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.887 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.888 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.888 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2051696 00:21:20.888 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2051696 ']' 00:21:20.888 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2051696 00:21:20.888 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:20.888 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:20.888 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2051696 00:21:20.888 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:20.888 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:20.888 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2051696' 00:21:20.888 killing process with pid 2051696 00:21:20.888 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2051696 00:21:20.888 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2051696 00:21:21.147 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:21.147 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:21.147 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:21.147 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:21.147 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=2073139 00:21:21.147 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 2073139 00:21:21.147 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:21.147 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2073139 ']' 00:21:21.147 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.147 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:21.147 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.147 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:21.147 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.406 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:21.406 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:21.406 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:21.406 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:21.406 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.406 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.406 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:21.406 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2073139 00:21:21.406 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2073139 ']' 00:21:21.406 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.406 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:21.406 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
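The keyring provisioning that the target performs in the next few entries reduces to a small set of rpc.py calls: each DHHC-1 secret lives in a plain file, is registered under a name with keyring_file_add_key, and is then bound to the host NQN with nvmf_subsystem_add_host. A minimal sketch of that flow, assuming the secrets have already been written out (the file names below are illustrative, not the exact temp files from this run; rpc.py is scripts/rpc.py talking to the target's default socket /var/tmp/spdk.sock):

  # register a host key and, for bidirectional auth, a controller key under symbolic names
  scripts/rpc.py keyring_file_add_key key0  /tmp/example.key0     # consumed via --dhchap-key
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/example.ckey0    # consumed via --dhchap-ctrlr-key

  # require DH-HMAC-CHAP for this host on the subsystem, referencing the names above
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

Later in the run, re-keying an already-registered host uses nvmf_subsystem_set_keys against the same host entry rather than removing and re-adding it.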
00:21:21.406 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:21.406 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.665 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:21.665 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:21.665 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:21.665 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.665 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.665 null0 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GaU 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.xgX ]] 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xgX 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.RZo 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Elr ]] 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Elr 00:21:21.665 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:21.666 11:16:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5dl 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.iN9 ]] 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iN9 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Svg 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
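On the host side the same named keys are consumed by the bdev_nvme layer: bdev_nvme_set_options selects which digests and DH groups the initiator will offer, and bdev_nvme_attach_controller names the keyring entry to authenticate with. The negative cases further down (restricting the host to a single digest or DH group, or deliberately pairing the wrong keys) all fail the same way, surfacing to the RPC caller as JSON-RPC code -5, "Input/output error", in the request/response dumps below. A rough host-side sketch against the /var/tmp/host.sock socket used in this run, assuming key3 is already registered in the host process's keyring:

  # digests / DH groups the initiator is willing to negotiate
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

  # attach and authenticate with the named key
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3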
00:21:21.666 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:22.602 nvme0n1 00:21:22.602 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.602 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.602 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.602 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.602 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.602 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.602 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.602 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.602 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.602 { 00:21:22.602 "cntlid": 1, 00:21:22.602 "qid": 0, 00:21:22.602 "state": "enabled", 00:21:22.602 "thread": "nvmf_tgt_poll_group_000", 00:21:22.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:22.602 "listen_address": { 00:21:22.602 "trtype": "TCP", 00:21:22.602 "adrfam": "IPv4", 00:21:22.602 "traddr": "10.0.0.2", 00:21:22.602 "trsvcid": "4420" 00:21:22.602 }, 00:21:22.602 "peer_address": { 00:21:22.602 "trtype": "TCP", 00:21:22.602 "adrfam": "IPv4", 00:21:22.602 "traddr": "10.0.0.1", 00:21:22.602 "trsvcid": "46468" 00:21:22.602 }, 00:21:22.602 "auth": { 00:21:22.602 "state": "completed", 00:21:22.602 "digest": "sha512", 00:21:22.602 "dhgroup": "ffdhe8192" 00:21:22.602 } 00:21:22.602 } 00:21:22.602 ]' 00:21:22.602 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.602 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.602 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.860 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:22.860 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.861 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.861 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.861 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.861 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:21:22.861 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:21:23.430 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:23.689 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:23.948 request: 00:21:23.948 { 00:21:23.948 "name": "nvme0", 00:21:23.948 "trtype": "tcp", 00:21:23.948 "traddr": "10.0.0.2", 00:21:23.948 "adrfam": "ipv4", 00:21:23.948 "trsvcid": "4420", 00:21:23.948 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:23.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:23.948 "prchk_reftag": false, 00:21:23.948 "prchk_guard": false, 00:21:23.948 "hdgst": false, 00:21:23.948 "ddgst": false, 00:21:23.948 "dhchap_key": "key3", 00:21:23.948 "allow_unrecognized_csi": false, 00:21:23.948 "method": "bdev_nvme_attach_controller", 00:21:23.948 "req_id": 1 00:21:23.948 } 00:21:23.948 Got JSON-RPC error response 00:21:23.948 response: 00:21:23.948 { 00:21:23.948 "code": -5, 00:21:23.948 "message": "Input/output error" 00:21:23.948 } 00:21:23.948 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:23.948 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:23.948 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:23.948 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:23.948 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:23.948 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:23.948 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:23.948 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:24.208 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:24.208 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:24.208 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:24.208 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:24.208 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:24.208 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:24.208 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:24.208 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:24.208 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.208 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.467 request: 00:21:24.467 { 00:21:24.467 "name": "nvme0", 00:21:24.467 "trtype": "tcp", 00:21:24.467 "traddr": "10.0.0.2", 00:21:24.467 "adrfam": "ipv4", 00:21:24.467 "trsvcid": "4420", 00:21:24.467 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:24.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:24.467 "prchk_reftag": false, 00:21:24.467 "prchk_guard": false, 00:21:24.467 "hdgst": false, 00:21:24.467 "ddgst": false, 00:21:24.467 "dhchap_key": "key3", 00:21:24.467 "allow_unrecognized_csi": false, 00:21:24.467 "method": "bdev_nvme_attach_controller", 00:21:24.467 "req_id": 1 00:21:24.467 } 00:21:24.467 Got JSON-RPC error response 00:21:24.467 response: 00:21:24.467 { 00:21:24.467 "code": -5, 00:21:24.467 "message": "Input/output error" 00:21:24.467 } 00:21:24.467 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:24.467 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:24.467 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:24.467 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:24.467 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:24.467 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:24.467 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:24.467 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:24.467 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:24.467 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:24.467 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:24.467 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.467 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.467 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.467 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:24.467 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.467 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.467 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.467 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:24.467 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:24.727 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:24.727 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:24.727 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:24.727 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:24.727 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:24.727 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:24.727 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:24.727 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:24.986 request: 00:21:24.986 { 00:21:24.986 "name": "nvme0", 00:21:24.986 "trtype": "tcp", 00:21:24.986 "traddr": "10.0.0.2", 00:21:24.986 "adrfam": "ipv4", 00:21:24.986 "trsvcid": "4420", 00:21:24.986 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:24.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:24.986 "prchk_reftag": false, 00:21:24.986 "prchk_guard": false, 00:21:24.986 "hdgst": false, 00:21:24.986 "ddgst": false, 00:21:24.986 "dhchap_key": "key0", 00:21:24.986 "dhchap_ctrlr_key": "key1", 00:21:24.986 "allow_unrecognized_csi": false, 00:21:24.986 "method": "bdev_nvme_attach_controller", 00:21:24.986 "req_id": 1 00:21:24.986 } 00:21:24.986 Got JSON-RPC error response 00:21:24.986 response: 00:21:24.986 { 00:21:24.986 "code": -5, 00:21:24.986 "message": "Input/output error" 00:21:24.986 } 00:21:24.986 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:24.986 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:24.986 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:24.986 11:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:24.986 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:24.986 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:24.986 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:25.245 nvme0n1 00:21:25.245 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:25.245 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:25.245 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.504 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.504 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.504 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.504 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:25.504 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.504 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.504 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.504 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:25.504 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:25.504 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:26.441 nvme0n1 00:21:26.441 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:26.441 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:26.441 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.441 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.441 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:26.441 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.441 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.441 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.441 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:26.441 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.441 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:26.700 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.700 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:21:26.700 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: --dhchap-ctrl-secret DHHC-1:03:NTFjZTg0YzQ4OTg4Mjg2Y2NhM2JkZmUzNzk3YTc1NTA4ZTRmMmFjMTljMGNiMWEwNGM5NmMwZDNlNGFhNTJmZha6tO8=: 00:21:27.268 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:27.268 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:27.268 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:27.268 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:27.268 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:27.268 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:27.268 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:27.268 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.268 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.527 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:21:27.527 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:27.527 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:27.527 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:27.527 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:27.527 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:27.527 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:27.527 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:27.527 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:27.527 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:27.786 request: 00:21:27.786 { 00:21:27.786 "name": "nvme0", 00:21:27.786 "trtype": "tcp", 00:21:27.786 "traddr": "10.0.0.2", 00:21:27.786 "adrfam": "ipv4", 00:21:27.786 "trsvcid": "4420", 00:21:27.786 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:27.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:27.786 "prchk_reftag": false, 00:21:27.786 "prchk_guard": false, 00:21:27.786 "hdgst": false, 00:21:27.786 "ddgst": false, 00:21:27.786 "dhchap_key": "key1", 00:21:27.786 "allow_unrecognized_csi": false, 00:21:27.786 "method": "bdev_nvme_attach_controller", 00:21:27.786 "req_id": 1 00:21:27.786 } 00:21:27.786 Got JSON-RPC error response 00:21:27.786 response: 00:21:27.786 { 00:21:27.786 "code": -5, 00:21:27.786 "message": "Input/output error" 00:21:27.786 } 00:21:27.786 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:27.786 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:27.786 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:27.786 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:27.786 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:27.786 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:27.787 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:28.872 nvme0n1 00:21:28.872 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:28.872 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.872 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:28.872 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.872 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.872 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.131 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:29.131 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.131 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.131 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.131 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:29.131 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:29.131 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:29.390 nvme0n1 00:21:29.390 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:29.390 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:29.390 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.390 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.390 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.390 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.648 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:29.648 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.648 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.648 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.648 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: '' 2s 00:21:29.648 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:29.648 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:29.648 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: 00:21:29.648 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:29.648 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:29.648 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:29.648 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: ]] 00:21:29.648 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Y2IxODRlOGRmODdjYmY2M2NlMWE5NjAxN2JmNjlhYWLmFoa0: 00:21:29.648 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:29.648 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:29.648 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: 2s 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: ]] 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YzU1MjgyN2MxYjFkYjIwM2Y5Yzg2MzdkYzNhOGIwMzU0NTA5YjhmZTdlNTdmMDIx/xjaWg==: 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:32.176 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:34.072 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:34.072 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:21:34.072 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:34.072 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:21:34.072 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:34.072 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:21:34.072 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:21:34.072 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.073 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:34.073 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.073 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.073 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.073 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:34.073 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:34.073 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:34.640 nvme0n1 00:21:34.640 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:34.640 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.640 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.640 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.640 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:34.640 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:35.206 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:35.206 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:35.206 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.206 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.206 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:35.206 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.206 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.206 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.206 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:35.206 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:35.464 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:35.464 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.464 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@258 -- # jq -r '.[].name' 00:21:35.722 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.722 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:35.722 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.722 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.722 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.722 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:35.722 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:35.722 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:35.722 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:35.722 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:35.722 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:35.722 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:35.722 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:35.722 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:35.979 request: 00:21:35.979 { 00:21:35.979 "name": "nvme0", 00:21:35.979 "dhchap_key": "key1", 00:21:35.979 "dhchap_ctrlr_key": "key3", 00:21:35.979 "method": "bdev_nvme_set_keys", 00:21:35.979 "req_id": 1 00:21:35.979 } 00:21:35.979 Got JSON-RPC error response 00:21:35.979 response: 00:21:35.979 { 00:21:35.979 "code": -13, 00:21:35.979 "message": "Permission denied" 00:21:35.979 } 00:21:35.979 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:35.979 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:35.979 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:35.979 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:35.979 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:35.979 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:35.979 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.237 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:21:36.237 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:37.173 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:37.173 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:37.173 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.433 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:37.433 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:37.433 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.433 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.433 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.433 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:37.433 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:37.433 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:38.369 nvme0n1 00:21:38.369 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:38.369 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.369 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.369 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.369 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:38.369 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:38.369 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:38.369 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
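Note on the flow traced above: this stretch of target/auth.sh exercises SPDK's in-band DHCHAP re-keying. The target is told which keys to accept for a given host with nvmf_subsystem_set_keys, and the already-attached host-side controller is then re-authenticated with bdev_nvme_set_keys; requesting a key the subsystem has not been granted is expected to fail with -13 (Permission denied), which is exactly what the negative tests in this trace check. A minimal sketch of one rotation, condensed from the commands visible in the trace (rpc_cmd is the harness wrapper around scripts/rpc.py; the socket path, address and NQNs are the ones used by this particular run):

  # target side: switch the subsystem to key2/key3 for this host
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # host side: re-authenticate the existing bdev controller with the new keys
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3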
00:21:38.369 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:38.369 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:38.369 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:38.369 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:38.369 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:38.627 request: 00:21:38.627 { 00:21:38.627 "name": "nvme0", 00:21:38.627 "dhchap_key": "key2", 00:21:38.627 "dhchap_ctrlr_key": "key0", 00:21:38.627 "method": "bdev_nvme_set_keys", 00:21:38.627 "req_id": 1 00:21:38.627 } 00:21:38.627 Got JSON-RPC error response 00:21:38.627 response: 00:21:38.627 { 00:21:38.627 "code": -13, 00:21:38.627 "message": "Permission denied" 00:21:38.627 } 00:21:38.627 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:38.627 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:38.627 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:38.627 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:38.627 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:38.627 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:38.627 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.886 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:38.886 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:39.820 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:39.820 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:39.820 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.078 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:40.078 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:40.078 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:40.078 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2051719 00:21:40.078 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2051719 ']' 00:21:40.078 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2051719 00:21:40.078 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:40.078 
11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:40.078 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2051719 00:21:40.078 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:40.078 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:40.078 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2051719' 00:21:40.078 killing process with pid 2051719 00:21:40.078 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2051719 00:21:40.078 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2051719 00:21:40.337 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:40.337 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:40.337 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:40.337 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:40.337 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:40.337 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:40.337 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:40.337 rmmod nvme_tcp 00:21:40.596 rmmod nvme_fabrics 00:21:40.596 rmmod nvme_keyring 00:21:40.596 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:40.596 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:40.596 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:40.596 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 2073139 ']' 00:21:40.596 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 2073139 00:21:40.596 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2073139 ']' 00:21:40.596 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2073139 00:21:40.596 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:40.596 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:40.596 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2073139 00:21:40.596 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:40.596 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:40.596 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2073139' 00:21:40.596 killing process with pid 2073139 00:21:40.596 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2073139 00:21:40.596 11:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2073139 00:21:40.855 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:40.855 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:40.855 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:40.855 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:40.855 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:21:40.855 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:40.855 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:21:40.855 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:40.855 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:40.855 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.855 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.855 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.758 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:42.758 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.GaU /tmp/spdk.key-sha256.RZo /tmp/spdk.key-sha384.5dl /tmp/spdk.key-sha512.Svg /tmp/spdk.key-sha512.xgX /tmp/spdk.key-sha384.Elr /tmp/spdk.key-sha256.iN9 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:42.758 00:21:42.758 real 2m29.065s 00:21:42.758 user 5m44.774s 00:21:42.758 sys 0m23.197s 00:21:42.758 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:42.758 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.758 ************************************ 00:21:42.758 END TEST nvmf_auth_target 00:21:42.758 ************************************ 00:21:42.758 11:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:42.758 11:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:42.758 11:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:42.758 11:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:42.758 11:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:43.018 ************************************ 00:21:43.018 START TEST nvmf_bdevio_no_huge 00:21:43.018 ************************************ 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:43.018 * Looking for test storage... 
00:21:43.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:43.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.018 --rc genhtml_branch_coverage=1 00:21:43.018 --rc genhtml_function_coverage=1 00:21:43.018 --rc genhtml_legend=1 00:21:43.018 --rc geninfo_all_blocks=1 00:21:43.018 --rc geninfo_unexecuted_blocks=1 00:21:43.018 00:21:43.018 ' 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:43.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.018 --rc genhtml_branch_coverage=1 00:21:43.018 --rc genhtml_function_coverage=1 00:21:43.018 --rc genhtml_legend=1 00:21:43.018 --rc geninfo_all_blocks=1 00:21:43.018 --rc geninfo_unexecuted_blocks=1 00:21:43.018 00:21:43.018 ' 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:43.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.018 --rc genhtml_branch_coverage=1 00:21:43.018 --rc genhtml_function_coverage=1 00:21:43.018 --rc genhtml_legend=1 00:21:43.018 --rc geninfo_all_blocks=1 00:21:43.018 --rc geninfo_unexecuted_blocks=1 00:21:43.018 00:21:43.018 ' 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:43.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.018 --rc genhtml_branch_coverage=1 00:21:43.018 --rc genhtml_function_coverage=1 00:21:43.018 --rc genhtml_legend=1 00:21:43.018 --rc geninfo_all_blocks=1 00:21:43.018 --rc geninfo_unexecuted_blocks=1 00:21:43.018 00:21:43.018 ' 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.018 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:21:43.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:21:43.019 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:21:48.292 
11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:48.292 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:48.292 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:48.292 Found net devices under 0000:af:00.0: cvl_0_0 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:48.292 Found net devices under 0000:af:00.1: cvl_0_1 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.292 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:48.293 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:48.293 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.293 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.293 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:48.293 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:48.293 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.293 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:48.552 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:48.552 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:48.552 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:48.552 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.552 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.552 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.552 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:48.552 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:48.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:48.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:21:48.552 00:21:48.552 --- 10.0.0.2 ping statistics --- 00:21:48.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.552 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:21:48.552 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:48.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:21:48.552 00:21:48.552 --- 10.0.0.1 ping statistics --- 00:21:48.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.552 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:21:48.552 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.552 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:21:48.552 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:48.552 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.552 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:48.552 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:48.552 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.552 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:48.552 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:48.552 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:48.552 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:48.552 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:48.552 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.552 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:48.552 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=2079758 00:21:48.552 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 2079758 00:21:48.552 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 2079758 ']' 00:21:48.552 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.552 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:21:48.552 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.552 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:48.552 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.552 [2024-10-06 11:16:46.069266] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:21:48.552 [2024-10-06 11:16:46.069308] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:48.552 [2024-10-06 11:16:46.124320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:48.813 [2024-10-06 11:16:46.188861] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.813 [2024-10-06 11:16:46.188896] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.813 [2024-10-06 11:16:46.188903] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.813 [2024-10-06 11:16:46.188909] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.813 [2024-10-06 11:16:46.188914] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
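Condensed from the xtrace output above, the fixture that nvmf_tcp_init builds for this run amounts to the following sketch (the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses and port 4420 are the ones in this log; the iptables comment tag and the nvmf_tgt path are abbreviated):

  # move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # admit NVMe/TCP traffic arriving on the initiator-side interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  # verify both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # bdevio_no_huge then starts the target inside the namespace without hugepages
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78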
00:21:48.813 [2024-10-06 11:16:46.190004] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:21:48.813 [2024-10-06 11:16:46.190141] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:21:48.813 [2024-10-06 11:16:46.190140] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:48.813 [2024-10-06 11:16:46.190111] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.813 [2024-10-06 11:16:46.330903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.813 Malloc0 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.813 [2024-10-06 11:16:46.375221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:48.813 { 00:21:48.813 "params": { 00:21:48.813 "name": "Nvme$subsystem", 00:21:48.813 "trtype": "$TEST_TRANSPORT", 00:21:48.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.813 "adrfam": "ipv4", 00:21:48.813 "trsvcid": "$NVMF_PORT", 00:21:48.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.813 "hdgst": ${hdgst:-false}, 00:21:48.813 "ddgst": ${ddgst:-false} 00:21:48.813 }, 00:21:48.813 "method": "bdev_nvme_attach_controller" 00:21:48.813 } 00:21:48.813 EOF 00:21:48.813 )") 00:21:48.813 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:21:49.074 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:21:49.074 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:21:49.074 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:49.074 "params": { 00:21:49.074 "name": "Nvme1", 00:21:49.074 "trtype": "tcp", 00:21:49.074 "traddr": "10.0.0.2", 00:21:49.074 "adrfam": "ipv4", 00:21:49.074 "trsvcid": "4420", 00:21:49.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.074 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.074 "hdgst": false, 00:21:49.074 "ddgst": false 00:21:49.074 }, 00:21:49.074 "method": "bdev_nvme_attach_controller" 00:21:49.074 }' 00:21:49.074 [2024-10-06 11:16:46.424378] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:21:49.074 [2024-10-06 11:16:46.424426] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2079884 ] 00:21:49.074 [2024-10-06 11:16:46.481505] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:49.074 [2024-10-06 11:16:46.547125] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.074 [2024-10-06 11:16:46.547228] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.074 [2024-10-06 11:16:46.547229] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.333 I/O targets: 00:21:49.333 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:49.333 00:21:49.333 00:21:49.333 CUnit - A unit testing framework for C - Version 2.1-3 00:21:49.333 http://cunit.sourceforge.net/ 00:21:49.333 00:21:49.333 00:21:49.333 Suite: bdevio tests on: Nvme1n1 00:21:49.592 Test: blockdev write read block ...passed 00:21:49.592 Test: blockdev write zeroes read block ...passed 00:21:49.592 Test: blockdev write zeroes read no split ...passed 00:21:49.592 Test: blockdev write zeroes read split ...passed 00:21:49.592 Test: blockdev write zeroes read split partial ...passed 00:21:49.592 Test: blockdev reset ...[2024-10-06 11:16:47.073471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.592 [2024-10-06 11:16:47.073533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208b400 (9): Bad file descriptor 00:21:49.851 [2024-10-06 11:16:47.184072] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:49.851 passed 00:21:49.851 Test: blockdev write read 8 blocks ...passed 00:21:49.851 Test: blockdev write read size > 128k ...passed 00:21:49.851 Test: blockdev write read invalid size ...passed 00:21:49.851 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:49.851 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:49.851 Test: blockdev write read max offset ...passed 00:21:49.851 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:49.851 Test: blockdev writev readv 8 blocks ...passed 00:21:49.851 Test: blockdev writev readv 30 x 1block ...passed 00:21:49.851 Test: blockdev writev readv block ...passed 00:21:49.851 Test: blockdev writev readv size > 128k ...passed 00:21:49.851 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:49.851 Test: blockdev comparev and writev ...[2024-10-06 11:16:47.355487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.851 [2024-10-06 11:16:47.355520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.851 [2024-10-06 11:16:47.355534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.851 [2024-10-06 11:16:47.355543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.851 [2024-10-06 11:16:47.355833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.851 [2024-10-06 11:16:47.355843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:49.851 [2024-10-06 11:16:47.355855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.851 [2024-10-06 11:16:47.355861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:49.851 [2024-10-06 11:16:47.356143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.851 [2024-10-06 11:16:47.356153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:49.851 [2024-10-06 11:16:47.356164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.851 [2024-10-06 11:16:47.356172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:49.851 [2024-10-06 11:16:47.356486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.851 [2024-10-06 11:16:47.356497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:49.851 [2024-10-06 11:16:47.356509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.851 [2024-10-06 11:16:47.356519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:49.851 passed 00:21:50.110 Test: blockdev nvme passthru rw ...passed 00:21:50.110 Test: blockdev nvme passthru vendor specific ...[2024-10-06 11:16:47.438545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:50.110 [2024-10-06 11:16:47.438563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:50.111 [2024-10-06 11:16:47.438708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:50.111 [2024-10-06 11:16:47.438718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:50.111 [2024-10-06 11:16:47.438868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:50.111 [2024-10-06 11:16:47.438878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:50.111 [2024-10-06 11:16:47.439016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:50.111 [2024-10-06 11:16:47.439026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:50.111 passed 00:21:50.111 Test: blockdev nvme admin passthru ...passed 00:21:50.111 Test: blockdev copy ...passed 00:21:50.111 00:21:50.111 Run Summary: Type Total Ran Passed Failed Inactive 00:21:50.111 suites 1 1 n/a 0 0 00:21:50.111 tests 23 23 23 0 0 00:21:50.111 asserts 152 152 152 0 n/a 00:21:50.111 00:21:50.111 Elapsed time = 1.264 seconds 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:50.370 rmmod nvme_tcp 00:21:50.370 rmmod nvme_fabrics 00:21:50.370 rmmod nvme_keyring 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 2079758 ']' 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 2079758 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 2079758 ']' 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 2079758 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2079758 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2079758' 00:21:50.370 killing process with pid 2079758 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 2079758 00:21:50.370 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 2079758 00:21:50.949 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:50.949 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:50.949 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:50.949 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:21:50.949 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:50.949 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:21:50.949 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:21:50.949 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:50.949 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:50.949 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.949 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.949 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.858 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:52.858 00:21:52.858 real 0m9.933s 00:21:52.858 user 0m11.885s 00:21:52.858 sys 0m5.054s 00:21:52.858 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:52.858 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:21:52.858 ************************************ 00:21:52.858 END TEST nvmf_bdevio_no_huge 00:21:52.858 ************************************ 00:21:52.858 11:16:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:52.858 11:16:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:52.858 11:16:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:52.858 11:16:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:52.858 ************************************ 00:21:52.858 START TEST nvmf_tls 00:21:52.858 ************************************ 00:21:52.858 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:52.858 * Looking for test storage... 00:21:52.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:52.858 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:53.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.118 --rc genhtml_branch_coverage=1 00:21:53.118 --rc genhtml_function_coverage=1 00:21:53.118 --rc genhtml_legend=1 00:21:53.118 --rc geninfo_all_blocks=1 00:21:53.118 --rc geninfo_unexecuted_blocks=1 00:21:53.118 00:21:53.118 ' 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:53.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.118 --rc genhtml_branch_coverage=1 00:21:53.118 --rc genhtml_function_coverage=1 00:21:53.118 --rc genhtml_legend=1 00:21:53.118 --rc geninfo_all_blocks=1 00:21:53.118 --rc geninfo_unexecuted_blocks=1 00:21:53.118 00:21:53.118 ' 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:53.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.118 --rc genhtml_branch_coverage=1 00:21:53.118 --rc genhtml_function_coverage=1 00:21:53.118 --rc genhtml_legend=1 00:21:53.118 --rc geninfo_all_blocks=1 00:21:53.118 --rc geninfo_unexecuted_blocks=1 00:21:53.118 00:21:53.118 ' 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:53.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.118 --rc genhtml_branch_coverage=1 00:21:53.118 --rc genhtml_function_coverage=1 00:21:53.118 --rc genhtml_legend=1 00:21:53.118 --rc geninfo_all_blocks=1 00:21:53.118 --rc geninfo_unexecuted_blocks=1 00:21:53.118 00:21:53.118 ' 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
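The lt/cmp_versions calls traced here reduce to a field-wise numeric comparison of the two version strings; a minimal sketch, assuming purely numeric fields (the real scripts/common.sh also copes with suffixes), is:

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local IFS=.-:                      # split fields on '.', '-' and ':'
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$3"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for ((i = 0; i < n; i++)); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && { [[ $2 == '>' ]]; return; }
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && { [[ $2 == '<' ]]; return; }
      done
      [[ $2 == *'='* ]]                  # all fields equal: only <=, >= and == hold
  }

'lt 1.15 2' compares 1 against 2 in the first field and succeeds, which is why the lcov_rc_opt/LCOV_OPTS branch above is taken.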
00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:21:53.118 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:53.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:21:53.119 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.397 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.397 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:21:58.397 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:58.397 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:58.397 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:58.397 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:58.397 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:58.397 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:21:58.397 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:58.397 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:21:58.397 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:21:58.397 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
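The e810/x722/mlx arrays populated above form the allowlist of NIC device IDs the test harness accepts (Intel E810 0x1592/0x159b, X722 0x37d2 and a range of Mellanox IDs); each matching PCI function is then mapped to its kernel netdev through sysfs, roughly as in this sketch (the 0000:af:00.0 address and the cvl_0_0 result are the ones reported in this run):

  pci=0000:af:00.0                               # reported as 'Found 0000:af:00.0 (0x8086 - 0x159b)'
  for path in /sys/bus/pci/devices/$pci/net/*; do
      [[ -e $path ]] || continue                 # skip functions with no bound netdev
      echo "Found net devices under $pci: ${path##*/}"   # prints cvl_0_0 on this machine
  done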
00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:58.398 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:58.398 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:58.398 Found net devices under 0000:af:00.0: cvl_0_0 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:58.398 Found net devices under 0000:af:00.1: cvl_0_1 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:58.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:58.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:21:58.398 00:21:58.398 --- 10.0.0.2 ping statistics --- 00:21:58.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.398 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:58.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:21:58.398 00:21:58.398 --- 10.0.0.1 ping statistics --- 00:21:58.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.398 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2083375 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2083375 00:21:58.398 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:58.399 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2083375 ']' 00:21:58.399 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.399 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:58.399 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.399 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:58.399 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.399 [2024-10-06 11:16:55.656551] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:21:58.399 [2024-10-06 11:16:55.656598] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.399 [2024-10-06 11:16:55.714980] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.399 [2024-10-06 11:16:55.752874] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.399 [2024-10-06 11:16:55.752915] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.399 [2024-10-06 11:16:55.752922] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.399 [2024-10-06 11:16:55.752929] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.399 [2024-10-06 11:16:55.752934] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:58.399 [2024-10-06 11:16:55.753464] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.399 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:58.399 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:58.399 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:58.399 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:58.399 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.399 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.399 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:21:58.399 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:58.658 true 00:21:58.658 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:58.658 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:21:58.658 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:21:58.658 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:21:58.658 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:58.917 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:58.917 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:21:59.177 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:21:59.177 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:21:59.177 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:59.436 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:59.436 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:21:59.436 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:21:59.436 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:21:59.436 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:59.436 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:21:59.705 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:21:59.705 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:21:59.705 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:59.968 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:59.968 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:59.968 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:21:59.968 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:21:59.968 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:00.227 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:00.227 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.cBMCwkuZro 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.EGSEq0Dgik 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.cBMCwkuZro 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.EGSEq0Dgik 00:22:00.486 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:00.746 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:01.007 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.cBMCwkuZro 00:22:01.007 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cBMCwkuZro 00:22:01.007 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:01.266 [2024-10-06 11:16:58.598266] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.266 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:01.266 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:01.525 [2024-10-06 11:16:58.963196] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:01.525 [2024-10-06 11:16:58.963416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.525 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:01.784 malloc0 00:22:01.784 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:02.043 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cBMCwkuZro 00:22:02.043 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:02.302 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.cBMCwkuZro 00:22:12.283 Initializing NVMe Controllers 00:22:12.283 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:12.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:12.283 Initialization complete. Launching workers. 00:22:12.283 ======================================================== 00:22:12.283 Latency(us) 00:22:12.283 Device Information : IOPS MiB/s Average min max 00:22:12.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16873.09 65.91 3793.14 776.19 5288.10 00:22:12.283 ======================================================== 00:22:12.283 Total : 16873.09 65.91 3793.14 776.19 5288.10 00:22:12.283 00:22:12.283 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cBMCwkuZro 00:22:12.283 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:12.283 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:12.283 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:12.283 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cBMCwkuZro 00:22:12.283 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:12.283 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2085837 00:22:12.283 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:12.283 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2085837 /var/tmp/bdevperf.sock 00:22:12.283 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2085837 ']' 00:22:12.283 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.283 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:12.283 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:12.283 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
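As an annotation on the keys created above: format_interchange_psk/format_key emit an NVMe TLS interchange string of the form NVMeTLSkey-1:01:<base64>:, and the two results are written to /tmp/tmp.cBMCwkuZro and /tmp/tmp.EGSEq0Dgik before being registered with keyring_file_add_key. The helper body is hidden behind the inline `python -` call, so the following is only a sketch of what it appears to compute, on the assumption that the base64 payload is the configured key bytes followed by their little-endian CRC32 as in the NVMe/TCP PSK interchange format (not copied from the suite's helper):

# Sketch: rebuild an interchange PSK like the ones written to /tmp above.
# The ":01:" field and the CRC32 layout are assumptions, not taken verbatim
# from the hidden format_key helper in nvmf/common.sh.
key=00112233445566778899aabbccddeeff    # same value the suite passes in
python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                      # key string as ASCII bytes
crc = zlib.crc32(key).to_bytes(4, "little")     # 4-byte checksum appended to the key
print(f"NVMeTLSkey-1:01:{base64.b64encode(key + crc).decode()}:")
PY

Writing that output to a file and chmod 0600'ing it, as the suite does above, gives a key file that keyring_file_add_key will accept.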
00:22:12.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.283 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:12.283 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.543 [2024-10-06 11:17:09.869318] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:22:12.543 [2024-10-06 11:17:09.869366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085837 ] 00:22:12.543 [2024-10-06 11:17:09.918103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.543 [2024-10-06 11:17:09.958452] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.543 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:12.543 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:12.543 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cBMCwkuZro 00:22:12.802 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:13.061 [2024-10-06 11:17:10.401615] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:13.061 TLSTESTn1 00:22:13.061 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:13.061 Running I/O for 10 seconds... 
00:22:23.471 5466.00 IOPS, 21.35 MiB/s 5625.50 IOPS, 21.97 MiB/s 5623.00 IOPS, 21.96 MiB/s 5494.00 IOPS, 21.46 MiB/s 4912.80 IOPS, 19.19 MiB/s 4502.67 IOPS, 17.59 MiB/s 4237.86 IOPS, 16.55 MiB/s 4037.00 IOPS, 15.77 MiB/s 3889.44 IOPS, 15.19 MiB/s 3754.10 IOPS, 14.66 MiB/s 00:22:23.471 Latency(us) 00:22:23.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.471 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:23.471 Verification LBA range: start 0x0 length 0x2000 00:22:23.471 TLSTESTn1 : 10.03 3757.15 14.68 0.00 0.00 34010.47 5742.20 68407.10 00:22:23.471 =================================================================================================================== 00:22:23.471 Total : 3757.15 14.68 0.00 0.00 34010.47 5742.20 68407.10 00:22:23.471 { 00:22:23.471 "results": [ 00:22:23.471 { 00:22:23.471 "job": "TLSTESTn1", 00:22:23.471 "core_mask": "0x4", 00:22:23.472 "workload": "verify", 00:22:23.472 "status": "finished", 00:22:23.472 "verify_range": { 00:22:23.472 "start": 0, 00:22:23.472 "length": 8192 00:22:23.472 }, 00:22:23.472 "queue_depth": 128, 00:22:23.472 "io_size": 4096, 00:22:23.472 "runtime": 10.025947, 00:22:23.472 "iops": 3757.1513194713675, 00:22:23.472 "mibps": 14.67637234168503, 00:22:23.472 "io_failed": 0, 00:22:23.472 "io_timeout": 0, 00:22:23.472 "avg_latency_us": 34010.466537066604, 00:22:23.472 "min_latency_us": 5742.201904761905, 00:22:23.472 "max_latency_us": 68407.10095238095 00:22:23.472 } 00:22:23.472 ], 00:22:23.472 "core_count": 1 00:22:23.472 } 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2085837 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2085837 ']' 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2085837 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2085837 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2085837' 00:22:23.472 killing process with pid 2085837 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2085837 00:22:23.472 Received shutdown signal, test time was about 10.000000 seconds 00:22:23.472 00:22:23.472 Latency(us) 00:22:23.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.472 =================================================================================================================== 00:22:23.472 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2085837 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 /tmp/tmp.EGSEq0Dgik 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EGSEq0Dgik 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EGSEq0Dgik 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EGSEq0Dgik 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2087539 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2087539 /var/tmp/bdevperf.sock 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2087539 ']' 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:23.472 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.472 [2024-10-06 11:17:20.947266] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:22:23.472 [2024-10-06 11:17:20.947315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087539 ] 00:22:23.472 [2024-10-06 11:17:20.996314] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.730 [2024-10-06 11:17:21.036979] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.730 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:23.730 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:23.730 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EGSEq0Dgik 00:22:23.730 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:23.989 [2024-10-06 11:17:21.471066] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:23.989 [2024-10-06 11:17:21.476380] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:23.989 [2024-10-06 11:17:21.477327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d3590 (107): Transport endpoint is not connected 00:22:23.989 [2024-10-06 11:17:21.478321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d3590 (9): Bad file descriptor 00:22:23.989 [2024-10-06 11:17:21.479322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:23.989 [2024-10-06 11:17:21.479338] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:23.989 [2024-10-06 11:17:21.479346] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:23.989 [2024-10-06 11:17:21.479358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
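That failure is the point of target/tls.sh@147: the deliberately mismatched key /tmp/tmp.EGSEq0Dgik was handed to the already-running bdevperf over /var/tmp/bdevperf.sock and the attach was attempted with it. Condensed from the rpc.py calls recorded above (a sketch of the sequence, not the suite's run_bdevperf helper), the initiator side is:

# bdevperf is already running as:
#   bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

# register the (wrong) PSK file under the name key0
$RPC -s $SOCK keyring_file_add_key key0 /tmp/tmp.EGSEq0Dgik

# attach to the TLS listener with that key; because it does not match the key
# configured on the target, the handshake fails and the RPC returns -5
$RPC -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

The JSON-RPC request/response dumped next is exactly that -5 Input/output error.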
00:22:23.989 request: 00:22:23.989 { 00:22:23.989 "name": "TLSTEST", 00:22:23.989 "trtype": "tcp", 00:22:23.989 "traddr": "10.0.0.2", 00:22:23.989 "adrfam": "ipv4", 00:22:23.989 "trsvcid": "4420", 00:22:23.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:23.989 "prchk_reftag": false, 00:22:23.989 "prchk_guard": false, 00:22:23.989 "hdgst": false, 00:22:23.989 "ddgst": false, 00:22:23.989 "psk": "key0", 00:22:23.989 "allow_unrecognized_csi": false, 00:22:23.989 "method": "bdev_nvme_attach_controller", 00:22:23.989 "req_id": 1 00:22:23.989 } 00:22:23.989 Got JSON-RPC error response 00:22:23.989 response: 00:22:23.989 { 00:22:23.989 "code": -5, 00:22:23.989 "message": "Input/output error" 00:22:23.989 } 00:22:23.989 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2087539 00:22:23.989 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2087539 ']' 00:22:23.989 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2087539 00:22:23.989 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:23.989 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:23.989 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2087539 00:22:23.989 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:23.989 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:23.989 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2087539' 00:22:23.989 killing process with pid 2087539 00:22:23.989 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2087539 00:22:23.989 Received shutdown signal, test time was about 4.392924 seconds 00:22:23.989 00:22:23.989 Latency(us) 00:22:23.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.990 =================================================================================================================== 00:22:23.990 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:23.990 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2087539 00:22:24.248 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:24.248 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:24.248 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:24.248 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:24.248 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:24.248 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cBMCwkuZro 00:22:24.248 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:24.248 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cBMCwkuZro 00:22:24.248 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:24.248 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.248 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:24.248 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.248 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cBMCwkuZro 00:22:24.248 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:24.248 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:24.248 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:24.249 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cBMCwkuZro 00:22:24.249 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:24.249 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2087659 00:22:24.249 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:24.249 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:24.249 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2087659 /var/tmp/bdevperf.sock 00:22:24.249 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2087659 ']' 00:22:24.249 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:24.249 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:24.249 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:24.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:24.249 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:24.249 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.249 [2024-10-06 11:17:21.761044] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:22:24.249 [2024-10-06 11:17:21.761095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087659 ] 00:22:24.249 [2024-10-06 11:17:21.809094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.507 [2024-10-06 11:17:21.847110] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.507 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:24.507 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:24.507 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cBMCwkuZro 00:22:24.767 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:24.767 [2024-10-06 11:17:22.308355] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:24.767 [2024-10-06 11:17:22.318407] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:24.767 [2024-10-06 11:17:22.318431] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:24.767 [2024-10-06 11:17:22.318452] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:24.767 [2024-10-06 11:17:22.318651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0d590 (107): Transport endpoint is not connected 00:22:24.767 [2024-10-06 11:17:22.319644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0d590 (9): Bad file descriptor 00:22:24.767 [2024-10-06 11:17:22.320646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:24.767 [2024-10-06 11:17:22.320658] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:24.767 [2024-10-06 11:17:22.320665] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:24.767 [2024-10-06 11:17:22.320675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:24.767 request: 00:22:24.767 { 00:22:24.767 "name": "TLSTEST", 00:22:24.767 "trtype": "tcp", 00:22:24.767 "traddr": "10.0.0.2", 00:22:24.767 "adrfam": "ipv4", 00:22:24.767 "trsvcid": "4420", 00:22:24.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.767 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:24.767 "prchk_reftag": false, 00:22:24.767 "prchk_guard": false, 00:22:24.767 "hdgst": false, 00:22:24.767 "ddgst": false, 00:22:24.767 "psk": "key0", 00:22:24.767 "allow_unrecognized_csi": false, 00:22:24.767 "method": "bdev_nvme_attach_controller", 00:22:24.767 "req_id": 1 00:22:24.767 } 00:22:24.767 Got JSON-RPC error response 00:22:24.767 response: 00:22:24.767 { 00:22:24.767 "code": -5, 00:22:24.767 "message": "Input/output error" 00:22:24.767 } 00:22:24.767 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2087659 00:22:24.767 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2087659 ']' 00:22:24.767 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2087659 00:22:24.767 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2087659 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2087659' 00:22:25.026 killing process with pid 2087659 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2087659 00:22:25.026 Received shutdown signal, test time was about 5.236397 seconds 00:22:25.026 00:22:25.026 Latency(us) 00:22:25.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.026 =================================================================================================================== 00:22:25.026 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2087659 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cBMCwkuZro 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cBMCwkuZro 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cBMCwkuZro 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cBMCwkuZro 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2087886 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2087886 /var/tmp/bdevperf.sock 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2087886 ']' 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:25.026 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.285 [2024-10-06 11:17:22.612612] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:22:25.285 [2024-10-06 11:17:22.612661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087886 ] 00:22:25.285 [2024-10-06 11:17:22.661606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.285 [2024-10-06 11:17:22.701508] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.285 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:25.285 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:25.285 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cBMCwkuZro 00:22:25.543 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:25.802 [2024-10-06 11:17:23.134393] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:25.802 [2024-10-06 11:17:23.140507] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:25.802 [2024-10-06 11:17:23.140529] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:25.802 [2024-10-06 11:17:23.140552] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:25.802 [2024-10-06 11:17:23.140629] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:25.802 [2024-10-06 11:17:23.141606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2281590 (9): Bad file descriptor 00:22:25.802 [2024-10-06 11:17:23.142607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:25.802 [2024-10-06 11:17:23.142621] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:25.802 [2024-10-06 11:17:23.142629] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:25.802 [2024-10-06 11:17:23.142639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
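Both negative cases above (an unregistered host NQN against cnode1, then a registered host against the nonexistent cnode2) die at the same check: the target derives the TLS PSK identity 'NVMe0R01 <hostnqn> <subnqn>' from the handshake, finds no key registered for that pairing, and drops the connection, so the initiator again ends up in failed state and the -5 error is dumped next. Only the host1/cnode1 pairing was given a PSK earlier via nvmf_subsystem_add_host. Purely for illustration (the suite intentionally leaves these unregistered), registrations that would at least satisfy the PSK-identity lookup look like this; cnode2 would additionally need its own listener and namespace to actually serve I/O:

# Hypothetical target-side registrations -- NOT part of this test run.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# allow host2 to reach cnode1 with the existing key0
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0

# cnode2 would first have to exist before host1 could be added to it
# (the serial number below is made up for the example)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk key0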
00:22:25.802 request: 00:22:25.802 { 00:22:25.802 "name": "TLSTEST", 00:22:25.802 "trtype": "tcp", 00:22:25.802 "traddr": "10.0.0.2", 00:22:25.802 "adrfam": "ipv4", 00:22:25.802 "trsvcid": "4420", 00:22:25.802 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:25.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:25.802 "prchk_reftag": false, 00:22:25.802 "prchk_guard": false, 00:22:25.802 "hdgst": false, 00:22:25.802 "ddgst": false, 00:22:25.802 "psk": "key0", 00:22:25.802 "allow_unrecognized_csi": false, 00:22:25.802 "method": "bdev_nvme_attach_controller", 00:22:25.802 "req_id": 1 00:22:25.802 } 00:22:25.802 Got JSON-RPC error response 00:22:25.802 response: 00:22:25.802 { 00:22:25.802 "code": -5, 00:22:25.802 "message": "Input/output error" 00:22:25.802 } 00:22:25.802 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2087886 00:22:25.802 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2087886 ']' 00:22:25.802 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2087886 00:22:25.802 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:25.802 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:25.802 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2087886 00:22:25.802 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:25.802 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:25.802 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2087886' 00:22:25.802 killing process with pid 2087886 00:22:25.802 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2087886 00:22:25.802 Received shutdown signal, test time was about 6.066481 seconds 00:22:25.802 00:22:25.802 Latency(us) 00:22:25.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.802 =================================================================================================================== 00:22:25.802 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:25.802 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2087886 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local 
arg=run_bdevperf 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2087904 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2087904 /var/tmp/bdevperf.sock 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2087904 ']' 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.062 [2024-10-06 11:17:23.438850] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:22:26.062 [2024-10-06 11:17:23.438896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087904 ] 00:22:26.062 [2024-10-06 11:17:23.486861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.062 [2024-10-06 11:17:23.525114] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:26.062 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:26.321 [2024-10-06 11:17:23.781638] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:26.321 [2024-10-06 11:17:23.781665] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:26.321 request: 00:22:26.321 { 00:22:26.321 "name": "key0", 00:22:26.321 "path": "", 00:22:26.321 "method": "keyring_file_add_key", 00:22:26.321 "req_id": 1 00:22:26.321 } 00:22:26.321 Got JSON-RPC error response 00:22:26.321 response: 00:22:26.321 { 00:22:26.321 "code": -1, 00:22:26.321 "message": "Operation not permitted" 00:22:26.321 } 00:22:26.321 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:26.580 [2024-10-06 11:17:23.954169] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:26.580 [2024-10-06 11:17:23.954195] bdev_nvme.c:6412:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:26.580 request: 00:22:26.580 { 00:22:26.580 "name": "TLSTEST", 00:22:26.580 "trtype": "tcp", 00:22:26.580 "traddr": "10.0.0.2", 00:22:26.580 "adrfam": "ipv4", 00:22:26.580 "trsvcid": "4420", 00:22:26.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:26.580 "prchk_reftag": false, 00:22:26.580 "prchk_guard": false, 00:22:26.580 "hdgst": false, 00:22:26.580 "ddgst": false, 00:22:26.580 "psk": "key0", 00:22:26.580 "allow_unrecognized_csi": false, 00:22:26.580 "method": "bdev_nvme_attach_controller", 00:22:26.580 "req_id": 1 00:22:26.580 } 00:22:26.580 Got JSON-RPC error response 00:22:26.580 response: 00:22:26.580 { 00:22:26.580 "code": -126, 00:22:26.580 "message": "Required key not available" 00:22:26.580 } 00:22:26.580 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2087904 00:22:26.580 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2087904 ']' 00:22:26.580 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2087904 00:22:26.580 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:26.580 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.580 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
2087904 00:22:26.580 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:26.580 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:26.580 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2087904' 00:22:26.580 killing process with pid 2087904 00:22:26.580 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2087904 00:22:26.580 Received shutdown signal, test time was about 6.861852 seconds 00:22:26.580 00:22:26.580 Latency(us) 00:22:26.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.580 =================================================================================================================== 00:22:26.580 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:26.580 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2087904 00:22:26.839 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:26.839 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:26.839 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:26.839 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:26.839 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:26.839 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2083375 00:22:26.839 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2083375 ']' 00:22:26.839 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2083375 00:22:26.839 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:26.839 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.839 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2083375 00:22:26.839 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:26.839 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:26.839 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2083375' 00:22:26.839 killing process with pid 2083375 00:22:26.839 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2083375 00:22:26.839 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2083375 00:22:26.839 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:27.098 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:27.098 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:22:27.098 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:27.098 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # 
key=00112233445566778899aabbccddeeff0011223344556677 00:22:27.098 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:22:27.098 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:22:27.098 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:27.098 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:27.098 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.WvEZEFM6mW 00:22:27.098 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:27.098 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.WvEZEFM6mW 00:22:27.098 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:27.098 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:27.098 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:27.099 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.099 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2088144 00:22:27.099 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:27.099 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2088144 00:22:27.099 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2088144 ']' 00:22:27.099 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.099 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:27.099 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.099 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:27.099 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.099 [2024-10-06 11:17:24.513908] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:22:27.099 [2024-10-06 11:17:24.513953] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.099 [2024-10-06 11:17:24.571633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.099 [2024-10-06 11:17:24.609460] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.099 [2024-10-06 11:17:24.609500] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:27.099 [2024-10-06 11:17:24.609507] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.099 [2024-10-06 11:17:24.609513] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.099 [2024-10-06 11:17:24.609518] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.099 [2024-10-06 11:17:24.610039] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.358 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.358 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:27.358 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:27.358 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:27.358 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.358 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.358 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.WvEZEFM6mW 00:22:27.358 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.WvEZEFM6mW 00:22:27.358 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:27.358 [2024-10-06 11:17:24.898433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.358 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:27.618 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:27.879 [2024-10-06 11:17:25.279398] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:27.879 [2024-10-06 11:17:25.279600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.879 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:28.138 malloc0 00:22:28.138 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:28.139 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.WvEZEFM6mW 00:22:28.399 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:28.659 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WvEZEFM6mW 00:22:28.659 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:22:28.659 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:28.659 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:28.659 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WvEZEFM6mW 00:22:28.659 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:28.659 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:28.659 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2088392 00:22:28.659 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:28.659 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2088392 /var/tmp/bdevperf.sock 00:22:28.659 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2088392 ']' 00:22:28.659 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.659 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:28.659 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.659 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:28.659 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.659 [2024-10-06 11:17:26.076790] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
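For readability, the target-side setup traced above (setup_nvmf_tgt) boils down to the following RPC sequence; paths are shortened here as a sketch (the log uses the absolute rpc.py path and the mktemp'd key file):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener (experimental)
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.WvEZEFM6mW
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0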
00:22:28.659 [2024-10-06 11:17:26.076838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088392 ] 00:22:28.659 [2024-10-06 11:17:26.126576] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.659 [2024-10-06 11:17:26.165190] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.918 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:28.918 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:28.918 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WvEZEFM6mW 00:22:28.919 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:29.178 [2024-10-06 11:17:26.602225] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:29.178 TLSTESTn1 00:22:29.178 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:29.437 Running I/O for 10 seconds... 00:22:39.499 5133.00 IOPS, 20.05 MiB/s 5520.50 IOPS, 21.56 MiB/s 5567.00 IOPS, 21.75 MiB/s 5629.50 IOPS, 21.99 MiB/s 5645.60 IOPS, 22.05 MiB/s 5548.33 IOPS, 21.67 MiB/s 5587.14 IOPS, 21.82 MiB/s 5622.50 IOPS, 21.96 MiB/s 5657.22 IOPS, 22.10 MiB/s 5628.10 IOPS, 21.98 MiB/s 00:22:39.499 Latency(us) 00:22:39.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.499 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:39.499 Verification LBA range: start 0x0 length 0x2000 00:22:39.499 TLSTESTn1 : 10.03 5625.16 21.97 0.00 0.00 22709.79 4493.90 41943.04 00:22:39.499 =================================================================================================================== 00:22:39.499 Total : 5625.16 21.97 0.00 0.00 22709.79 4493.90 41943.04 00:22:39.499 { 00:22:39.499 "results": [ 00:22:39.499 { 00:22:39.499 "job": "TLSTESTn1", 00:22:39.499 "core_mask": "0x4", 00:22:39.499 "workload": "verify", 00:22:39.499 "status": "finished", 00:22:39.499 "verify_range": { 00:22:39.499 "start": 0, 00:22:39.499 "length": 8192 00:22:39.499 }, 00:22:39.499 "queue_depth": 128, 00:22:39.499 "io_size": 4096, 00:22:39.499 "runtime": 10.027984, 00:22:39.499 "iops": 5625.158556296061, 00:22:39.499 "mibps": 21.973275610531488, 00:22:39.499 "io_failed": 0, 00:22:39.499 "io_timeout": 0, 00:22:39.499 "avg_latency_us": 22709.79444988937, 00:22:39.499 "min_latency_us": 4493.897142857143, 00:22:39.499 "max_latency_us": 41943.04 00:22:39.499 } 00:22:39.499 ], 00:22:39.499 "core_count": 1 00:22:39.499 } 00:22:39.499 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:39.499 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2088392 00:22:39.499 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@950 -- # '[' -z 2088392 ']' 00:22:39.499 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2088392 00:22:39.499 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:39.499 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:39.499 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2088392 00:22:39.499 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:39.499 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:39.499 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2088392' 00:22:39.499 killing process with pid 2088392 00:22:39.499 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2088392 00:22:39.499 Received shutdown signal, test time was about 10.000000 seconds 00:22:39.499 00:22:39.499 Latency(us) 00:22:39.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.499 =================================================================================================================== 00:22:39.499 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:39.499 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2088392 00:22:39.499 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.WvEZEFM6mW 00:22:39.499 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WvEZEFM6mW 00:22:39.499 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:39.499 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WvEZEFM6mW 00:22:39.499 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:39.499 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:39.499 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:39.499 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:39.499 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WvEZEFM6mW 00:22:39.499 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:39.499 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:39.499 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:39.499 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WvEZEFM6mW 00:22:39.499 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:39.759 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2090175 00:22:39.759 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 
-- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:39.759 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:39.759 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2090175 /var/tmp/bdevperf.sock 00:22:39.759 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2090175 ']' 00:22:39.759 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:39.759 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:39.759 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:39.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:39.759 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:39.759 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.759 [2024-10-06 11:17:37.117350] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:22:39.759 [2024-10-06 11:17:37.117396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2090175 ] 00:22:39.759 [2024-10-06 11:17:37.167103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.759 [2024-10-06 11:17:37.202390] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.759 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:39.759 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:39.759 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WvEZEFM6mW 00:22:40.018 [2024-10-06 11:17:37.458906] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.WvEZEFM6mW': 0100666 00:22:40.018 [2024-10-06 11:17:37.458937] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:40.018 request: 00:22:40.018 { 00:22:40.018 "name": "key0", 00:22:40.018 "path": "/tmp/tmp.WvEZEFM6mW", 00:22:40.018 "method": "keyring_file_add_key", 00:22:40.018 "req_id": 1 00:22:40.018 } 00:22:40.018 Got JSON-RPC error response 00:22:40.018 response: 00:22:40.018 { 00:22:40.018 "code": -1, 00:22:40.018 "message": "Operation not permitted" 00:22:40.018 } 00:22:40.018 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:40.277 [2024-10-06 11:17:37.663514] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:40.277 [2024-10-06 11:17:37.663540] bdev_nvme.c:6412:spdk_bdev_nvme_create: *ERROR*: Could not 
load PSK: key0 00:22:40.277 request: 00:22:40.277 { 00:22:40.277 "name": "TLSTEST", 00:22:40.277 "trtype": "tcp", 00:22:40.277 "traddr": "10.0.0.2", 00:22:40.277 "adrfam": "ipv4", 00:22:40.277 "trsvcid": "4420", 00:22:40.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.277 "prchk_reftag": false, 00:22:40.277 "prchk_guard": false, 00:22:40.277 "hdgst": false, 00:22:40.277 "ddgst": false, 00:22:40.277 "psk": "key0", 00:22:40.277 "allow_unrecognized_csi": false, 00:22:40.277 "method": "bdev_nvme_attach_controller", 00:22:40.277 "req_id": 1 00:22:40.277 } 00:22:40.277 Got JSON-RPC error response 00:22:40.277 response: 00:22:40.277 { 00:22:40.277 "code": -126, 00:22:40.277 "message": "Required key not available" 00:22:40.277 } 00:22:40.277 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2090175 00:22:40.277 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2090175 ']' 00:22:40.277 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2090175 00:22:40.277 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:40.277 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:40.277 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2090175 00:22:40.277 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:40.277 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:40.277 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2090175' 00:22:40.277 killing process with pid 2090175 00:22:40.277 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2090175 00:22:40.277 Received shutdown signal, test time was about 10.000000 seconds 00:22:40.277 00:22:40.277 Latency(us) 00:22:40.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.277 =================================================================================================================== 00:22:40.278 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:40.278 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2090175 00:22:40.537 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:40.537 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:40.537 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:40.537 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:40.537 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:40.537 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2088144 00:22:40.537 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2088144 ']' 00:22:40.537 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2088144 00:22:40.537 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:40.537 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:40.537 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2088144 00:22:40.537 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:40.537 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:40.537 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2088144' 00:22:40.537 killing process with pid 2088144 00:22:40.537 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2088144 00:22:40.537 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2088144 00:22:40.797 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:40.797 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:40.797 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:40.797 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.797 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2090410 00:22:40.797 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2090410 00:22:40.797 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:40.797 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2090410 ']' 00:22:40.797 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.797 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:40.797 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.797 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:40.797 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.797 [2024-10-06 11:17:38.188860] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:22:40.797 [2024-10-06 11:17:38.188905] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.797 [2024-10-06 11:17:38.246990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.797 [2024-10-06 11:17:38.284198] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.797 [2024-10-06 11:17:38.284239] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
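The negative tests around this point hinge on one rule, recapped here as an illustration: SPDK's file-based keyring rejects any key file that is readable by group or other, so after the chmod 0666 above the key can no longer be registered ("Operation not permitted") and any attach that names it fails with "Required key not available"; restoring 0600 makes the same calls succeed again.

chmod 0666 /tmp/tmp.WvEZEFM6mW
./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.WvEZEFM6mW    # rejected: Invalid permissions (0100666)
chmod 0600 /tmp/tmp.WvEZEFM6mW
./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.WvEZEFM6mW    # accepted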
00:22:40.797 [2024-10-06 11:17:38.284245] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.797 [2024-10-06 11:17:38.284252] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.797 [2024-10-06 11:17:38.284257] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.797 [2024-10-06 11:17:38.284786] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.797 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:40.797 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:40.797 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:40.797 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:40.797 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.055 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.055 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.WvEZEFM6mW 00:22:41.055 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:41.055 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.WvEZEFM6mW 00:22:41.055 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:22:41.055 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.055 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:22:41.055 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.055 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.WvEZEFM6mW 00:22:41.055 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.WvEZEFM6mW 00:22:41.055 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:41.055 [2024-10-06 11:17:38.581583] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.055 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:41.315 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:41.574 [2024-10-06 11:17:38.954563] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:41.574 [2024-10-06 11:17:38.954781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.575 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:41.834 malloc0 00:22:41.834 11:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:41.834 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.WvEZEFM6mW 00:22:42.094 [2024-10-06 11:17:39.526314] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.WvEZEFM6mW': 0100666 00:22:42.094 [2024-10-06 11:17:39.526340] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:42.094 request: 00:22:42.094 { 00:22:42.094 "name": "key0", 00:22:42.094 "path": "/tmp/tmp.WvEZEFM6mW", 00:22:42.094 "method": "keyring_file_add_key", 00:22:42.094 "req_id": 1 00:22:42.094 } 00:22:42.094 Got JSON-RPC error response 00:22:42.094 response: 00:22:42.094 { 00:22:42.094 "code": -1, 00:22:42.094 "message": "Operation not permitted" 00:22:42.094 } 00:22:42.094 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:42.353 [2024-10-06 11:17:39.702788] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:22:42.354 [2024-10-06 11:17:39.702819] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:42.354 request: 00:22:42.354 { 00:22:42.354 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.354 "host": "nqn.2016-06.io.spdk:host1", 00:22:42.354 "psk": "key0", 00:22:42.354 "method": "nvmf_subsystem_add_host", 00:22:42.354 "req_id": 1 00:22:42.354 } 00:22:42.354 Got JSON-RPC error response 00:22:42.354 response: 00:22:42.354 { 00:22:42.354 "code": -32603, 00:22:42.354 "message": "Internal error" 00:22:42.354 } 00:22:42.354 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:42.354 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:42.354 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:42.354 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:42.354 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2090410 00:22:42.354 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2090410 ']' 00:22:42.354 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2090410 00:22:42.354 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:42.354 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:42.354 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2090410 00:22:42.354 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:42.354 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:42.354 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2090410' 00:22:42.354 killing process with pid 2090410 00:22:42.354 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 2090410 00:22:42.354 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2090410 00:22:42.614 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.WvEZEFM6mW 00:22:42.614 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:22:42.614 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:42.614 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.614 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.614 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2090675 00:22:42.614 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:42.614 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2090675 00:22:42.614 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2090675 ']' 00:22:42.614 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.614 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:42.614 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.614 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:42.614 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.614 [2024-10-06 11:17:40.005285] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:22:42.614 [2024-10-06 11:17:40.005329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.614 [2024-10-06 11:17:40.066087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.614 [2024-10-06 11:17:40.107000] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.614 [2024-10-06 11:17:40.107040] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.614 [2024-10-06 11:17:40.107047] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.614 [2024-10-06 11:17:40.107053] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.614 [2024-10-06 11:17:40.107062] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
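For orientation before the next pass: the bdevperf (initiator) side of each successful run above and below follows the sequence sketched here, with paths shortened relative to the absolute ones in the trace. bdevperf is started with -z so it waits on its own RPC socket, the PSK is registered there, the controller is attached against the TLS listener, and verify I/O is driven for 10 seconds.

./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WvEZEFM6mW
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests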
00:22:42.614 [2024-10-06 11:17:40.107581] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.873 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:42.873 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:42.873 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:42.873 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:42.873 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.874 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.874 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.WvEZEFM6mW 00:22:42.874 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.WvEZEFM6mW 00:22:42.874 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:42.874 [2024-10-06 11:17:40.404747] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.874 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:43.133 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:43.392 [2024-10-06 11:17:40.785715] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:43.392 [2024-10-06 11:17:40.785907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.392 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:43.652 malloc0 00:22:43.652 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:43.652 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.WvEZEFM6mW 00:22:43.912 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:44.171 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2090933 00:22:44.171 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:44.171 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:44.171 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2090933 /var/tmp/bdevperf.sock 00:22:44.171 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 2090933 ']' 00:22:44.171 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:44.171 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:44.171 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:44.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:44.171 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:44.171 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.171 [2024-10-06 11:17:41.596324] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:22:44.171 [2024-10-06 11:17:41.596375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2090933 ] 00:22:44.171 [2024-10-06 11:17:41.650259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.171 [2024-10-06 11:17:41.689126] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.459 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:44.459 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:44.459 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WvEZEFM6mW 00:22:44.459 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:44.718 [2024-10-06 11:17:42.146235] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:44.718 TLSTESTn1 00:22:44.718 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:44.978 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:22:44.978 "subsystems": [ 00:22:44.978 { 00:22:44.978 "subsystem": "keyring", 00:22:44.978 "config": [ 00:22:44.978 { 00:22:44.978 "method": "keyring_file_add_key", 00:22:44.978 "params": { 00:22:44.978 "name": "key0", 00:22:44.978 "path": "/tmp/tmp.WvEZEFM6mW" 00:22:44.978 } 00:22:44.978 } 00:22:44.978 ] 00:22:44.978 }, 00:22:44.978 { 00:22:44.978 "subsystem": "iobuf", 00:22:44.978 "config": [ 00:22:44.978 { 00:22:44.978 "method": "iobuf_set_options", 00:22:44.978 "params": { 00:22:44.978 "small_pool_count": 8192, 00:22:44.978 "large_pool_count": 1024, 00:22:44.978 "small_bufsize": 8192, 00:22:44.978 "large_bufsize": 135168 00:22:44.978 } 00:22:44.978 } 00:22:44.978 ] 00:22:44.978 }, 00:22:44.978 { 00:22:44.978 "subsystem": "sock", 00:22:44.978 "config": [ 00:22:44.978 { 00:22:44.978 "method": "sock_set_default_impl", 00:22:44.978 "params": { 00:22:44.978 "impl_name": "posix" 00:22:44.978 } 00:22:44.978 }, 
00:22:44.978 { 00:22:44.978 "method": "sock_impl_set_options", 00:22:44.978 "params": { 00:22:44.978 "impl_name": "ssl", 00:22:44.978 "recv_buf_size": 4096, 00:22:44.978 "send_buf_size": 4096, 00:22:44.978 "enable_recv_pipe": true, 00:22:44.978 "enable_quickack": false, 00:22:44.978 "enable_placement_id": 0, 00:22:44.978 "enable_zerocopy_send_server": true, 00:22:44.978 "enable_zerocopy_send_client": false, 00:22:44.978 "zerocopy_threshold": 0, 00:22:44.978 "tls_version": 0, 00:22:44.978 "enable_ktls": false 00:22:44.978 } 00:22:44.978 }, 00:22:44.978 { 00:22:44.978 "method": "sock_impl_set_options", 00:22:44.978 "params": { 00:22:44.978 "impl_name": "posix", 00:22:44.978 "recv_buf_size": 2097152, 00:22:44.978 "send_buf_size": 2097152, 00:22:44.978 "enable_recv_pipe": true, 00:22:44.978 "enable_quickack": false, 00:22:44.978 "enable_placement_id": 0, 00:22:44.978 "enable_zerocopy_send_server": true, 00:22:44.978 "enable_zerocopy_send_client": false, 00:22:44.978 "zerocopy_threshold": 0, 00:22:44.978 "tls_version": 0, 00:22:44.978 "enable_ktls": false 00:22:44.978 } 00:22:44.978 } 00:22:44.978 ] 00:22:44.978 }, 00:22:44.978 { 00:22:44.978 "subsystem": "vmd", 00:22:44.978 "config": [] 00:22:44.978 }, 00:22:44.978 { 00:22:44.978 "subsystem": "accel", 00:22:44.978 "config": [ 00:22:44.978 { 00:22:44.978 "method": "accel_set_options", 00:22:44.978 "params": { 00:22:44.978 "small_cache_size": 128, 00:22:44.978 "large_cache_size": 16, 00:22:44.978 "task_count": 2048, 00:22:44.978 "sequence_count": 2048, 00:22:44.978 "buf_count": 2048 00:22:44.978 } 00:22:44.978 } 00:22:44.978 ] 00:22:44.978 }, 00:22:44.978 { 00:22:44.978 "subsystem": "bdev", 00:22:44.978 "config": [ 00:22:44.978 { 00:22:44.978 "method": "bdev_set_options", 00:22:44.978 "params": { 00:22:44.978 "bdev_io_pool_size": 65535, 00:22:44.978 "bdev_io_cache_size": 256, 00:22:44.978 "bdev_auto_examine": true, 00:22:44.978 "iobuf_small_cache_size": 128, 00:22:44.978 "iobuf_large_cache_size": 16 00:22:44.978 } 00:22:44.978 }, 00:22:44.978 { 00:22:44.978 "method": "bdev_raid_set_options", 00:22:44.978 "params": { 00:22:44.978 "process_window_size_kb": 1024, 00:22:44.978 "process_max_bandwidth_mb_sec": 0 00:22:44.978 } 00:22:44.978 }, 00:22:44.978 { 00:22:44.978 "method": "bdev_iscsi_set_options", 00:22:44.978 "params": { 00:22:44.978 "timeout_sec": 30 00:22:44.978 } 00:22:44.978 }, 00:22:44.978 { 00:22:44.978 "method": "bdev_nvme_set_options", 00:22:44.978 "params": { 00:22:44.978 "action_on_timeout": "none", 00:22:44.978 "timeout_us": 0, 00:22:44.978 "timeout_admin_us": 0, 00:22:44.978 "keep_alive_timeout_ms": 10000, 00:22:44.978 "arbitration_burst": 0, 00:22:44.978 "low_priority_weight": 0, 00:22:44.978 "medium_priority_weight": 0, 00:22:44.978 "high_priority_weight": 0, 00:22:44.978 "nvme_adminq_poll_period_us": 10000, 00:22:44.978 "nvme_ioq_poll_period_us": 0, 00:22:44.978 "io_queue_requests": 0, 00:22:44.978 "delay_cmd_submit": true, 00:22:44.978 "transport_retry_count": 4, 00:22:44.978 "bdev_retry_count": 3, 00:22:44.978 "transport_ack_timeout": 0, 00:22:44.978 "ctrlr_loss_timeout_sec": 0, 00:22:44.978 "reconnect_delay_sec": 0, 00:22:44.978 "fast_io_fail_timeout_sec": 0, 00:22:44.978 "disable_auto_failback": false, 00:22:44.978 "generate_uuids": false, 00:22:44.978 "transport_tos": 0, 00:22:44.978 "nvme_error_stat": false, 00:22:44.978 "rdma_srq_size": 0, 00:22:44.978 "io_path_stat": false, 00:22:44.978 "allow_accel_sequence": false, 00:22:44.978 "rdma_max_cq_size": 0, 00:22:44.978 "rdma_cm_event_timeout_ms": 0, 00:22:44.978 
"dhchap_digests": [ 00:22:44.978 "sha256", 00:22:44.978 "sha384", 00:22:44.978 "sha512" 00:22:44.978 ], 00:22:44.979 "dhchap_dhgroups": [ 00:22:44.979 "null", 00:22:44.979 "ffdhe2048", 00:22:44.979 "ffdhe3072", 00:22:44.979 "ffdhe4096", 00:22:44.979 "ffdhe6144", 00:22:44.979 "ffdhe8192" 00:22:44.979 ] 00:22:44.979 } 00:22:44.979 }, 00:22:44.979 { 00:22:44.979 "method": "bdev_nvme_set_hotplug", 00:22:44.979 "params": { 00:22:44.979 "period_us": 100000, 00:22:44.979 "enable": false 00:22:44.979 } 00:22:44.979 }, 00:22:44.979 { 00:22:44.979 "method": "bdev_malloc_create", 00:22:44.979 "params": { 00:22:44.979 "name": "malloc0", 00:22:44.979 "num_blocks": 8192, 00:22:44.979 "block_size": 4096, 00:22:44.979 "physical_block_size": 4096, 00:22:44.979 "uuid": "21e67e53-de61-4794-a288-f151726e03b0", 00:22:44.979 "optimal_io_boundary": 0, 00:22:44.979 "md_size": 0, 00:22:44.979 "dif_type": 0, 00:22:44.979 "dif_is_head_of_md": false, 00:22:44.979 "dif_pi_format": 0 00:22:44.979 } 00:22:44.979 }, 00:22:44.979 { 00:22:44.979 "method": "bdev_wait_for_examine" 00:22:44.979 } 00:22:44.979 ] 00:22:44.979 }, 00:22:44.979 { 00:22:44.979 "subsystem": "nbd", 00:22:44.979 "config": [] 00:22:44.979 }, 00:22:44.979 { 00:22:44.979 "subsystem": "scheduler", 00:22:44.979 "config": [ 00:22:44.979 { 00:22:44.979 "method": "framework_set_scheduler", 00:22:44.979 "params": { 00:22:44.979 "name": "static" 00:22:44.979 } 00:22:44.979 } 00:22:44.979 ] 00:22:44.979 }, 00:22:44.979 { 00:22:44.979 "subsystem": "nvmf", 00:22:44.979 "config": [ 00:22:44.979 { 00:22:44.979 "method": "nvmf_set_config", 00:22:44.979 "params": { 00:22:44.979 "discovery_filter": "match_any", 00:22:44.979 "admin_cmd_passthru": { 00:22:44.979 "identify_ctrlr": false 00:22:44.979 }, 00:22:44.979 "dhchap_digests": [ 00:22:44.979 "sha256", 00:22:44.979 "sha384", 00:22:44.979 "sha512" 00:22:44.979 ], 00:22:44.979 "dhchap_dhgroups": [ 00:22:44.979 "null", 00:22:44.979 "ffdhe2048", 00:22:44.979 "ffdhe3072", 00:22:44.979 "ffdhe4096", 00:22:44.979 "ffdhe6144", 00:22:44.979 "ffdhe8192" 00:22:44.979 ] 00:22:44.979 } 00:22:44.979 }, 00:22:44.979 { 00:22:44.979 "method": "nvmf_set_max_subsystems", 00:22:44.979 "params": { 00:22:44.979 "max_subsystems": 1024 00:22:44.979 } 00:22:44.979 }, 00:22:44.979 { 00:22:44.979 "method": "nvmf_set_crdt", 00:22:44.979 "params": { 00:22:44.979 "crdt1": 0, 00:22:44.979 "crdt2": 0, 00:22:44.979 "crdt3": 0 00:22:44.979 } 00:22:44.979 }, 00:22:44.979 { 00:22:44.979 "method": "nvmf_create_transport", 00:22:44.979 "params": { 00:22:44.979 "trtype": "TCP", 00:22:44.979 "max_queue_depth": 128, 00:22:44.979 "max_io_qpairs_per_ctrlr": 127, 00:22:44.979 "in_capsule_data_size": 4096, 00:22:44.979 "max_io_size": 131072, 00:22:44.979 "io_unit_size": 131072, 00:22:44.979 "max_aq_depth": 128, 00:22:44.979 "num_shared_buffers": 511, 00:22:44.979 "buf_cache_size": 4294967295, 00:22:44.979 "dif_insert_or_strip": false, 00:22:44.979 "zcopy": false, 00:22:44.979 "c2h_success": false, 00:22:44.979 "sock_priority": 0, 00:22:44.979 "abort_timeout_sec": 1, 00:22:44.979 "ack_timeout": 0, 00:22:44.979 "data_wr_pool_size": 0 00:22:44.979 } 00:22:44.979 }, 00:22:44.979 { 00:22:44.979 "method": "nvmf_create_subsystem", 00:22:44.979 "params": { 00:22:44.979 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.979 "allow_any_host": false, 00:22:44.979 "serial_number": "SPDK00000000000001", 00:22:44.979 "model_number": "SPDK bdev Controller", 00:22:44.979 "max_namespaces": 10, 00:22:44.979 "min_cntlid": 1, 00:22:44.979 "max_cntlid": 65519, 00:22:44.979 
"ana_reporting": false 00:22:44.979 } 00:22:44.979 }, 00:22:44.979 { 00:22:44.979 "method": "nvmf_subsystem_add_host", 00:22:44.979 "params": { 00:22:44.979 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.979 "host": "nqn.2016-06.io.spdk:host1", 00:22:44.979 "psk": "key0" 00:22:44.979 } 00:22:44.979 }, 00:22:44.979 { 00:22:44.979 "method": "nvmf_subsystem_add_ns", 00:22:44.979 "params": { 00:22:44.979 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.979 "namespace": { 00:22:44.979 "nsid": 1, 00:22:44.979 "bdev_name": "malloc0", 00:22:44.979 "nguid": "21E67E53DE614794A288F151726E03B0", 00:22:44.979 "uuid": "21e67e53-de61-4794-a288-f151726e03b0", 00:22:44.979 "no_auto_visible": false 00:22:44.979 } 00:22:44.979 } 00:22:44.979 }, 00:22:44.979 { 00:22:44.979 "method": "nvmf_subsystem_add_listener", 00:22:44.979 "params": { 00:22:44.979 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.979 "listen_address": { 00:22:44.979 "trtype": "TCP", 00:22:44.979 "adrfam": "IPv4", 00:22:44.979 "traddr": "10.0.0.2", 00:22:44.979 "trsvcid": "4420" 00:22:44.979 }, 00:22:44.979 "secure_channel": true 00:22:44.979 } 00:22:44.979 } 00:22:44.979 ] 00:22:44.979 } 00:22:44.979 ] 00:22:44.979 }' 00:22:44.979 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:45.239 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:22:45.239 "subsystems": [ 00:22:45.239 { 00:22:45.239 "subsystem": "keyring", 00:22:45.239 "config": [ 00:22:45.239 { 00:22:45.239 "method": "keyring_file_add_key", 00:22:45.239 "params": { 00:22:45.239 "name": "key0", 00:22:45.239 "path": "/tmp/tmp.WvEZEFM6mW" 00:22:45.239 } 00:22:45.239 } 00:22:45.239 ] 00:22:45.239 }, 00:22:45.239 { 00:22:45.239 "subsystem": "iobuf", 00:22:45.239 "config": [ 00:22:45.239 { 00:22:45.239 "method": "iobuf_set_options", 00:22:45.239 "params": { 00:22:45.239 "small_pool_count": 8192, 00:22:45.239 "large_pool_count": 1024, 00:22:45.239 "small_bufsize": 8192, 00:22:45.239 "large_bufsize": 135168 00:22:45.239 } 00:22:45.239 } 00:22:45.239 ] 00:22:45.239 }, 00:22:45.239 { 00:22:45.239 "subsystem": "sock", 00:22:45.239 "config": [ 00:22:45.239 { 00:22:45.239 "method": "sock_set_default_impl", 00:22:45.239 "params": { 00:22:45.239 "impl_name": "posix" 00:22:45.239 } 00:22:45.239 }, 00:22:45.239 { 00:22:45.239 "method": "sock_impl_set_options", 00:22:45.239 "params": { 00:22:45.239 "impl_name": "ssl", 00:22:45.239 "recv_buf_size": 4096, 00:22:45.239 "send_buf_size": 4096, 00:22:45.239 "enable_recv_pipe": true, 00:22:45.239 "enable_quickack": false, 00:22:45.239 "enable_placement_id": 0, 00:22:45.239 "enable_zerocopy_send_server": true, 00:22:45.239 "enable_zerocopy_send_client": false, 00:22:45.239 "zerocopy_threshold": 0, 00:22:45.239 "tls_version": 0, 00:22:45.239 "enable_ktls": false 00:22:45.239 } 00:22:45.239 }, 00:22:45.239 { 00:22:45.239 "method": "sock_impl_set_options", 00:22:45.239 "params": { 00:22:45.239 "impl_name": "posix", 00:22:45.239 "recv_buf_size": 2097152, 00:22:45.239 "send_buf_size": 2097152, 00:22:45.239 "enable_recv_pipe": true, 00:22:45.239 "enable_quickack": false, 00:22:45.239 "enable_placement_id": 0, 00:22:45.239 "enable_zerocopy_send_server": true, 00:22:45.239 "enable_zerocopy_send_client": false, 00:22:45.239 "zerocopy_threshold": 0, 00:22:45.239 "tls_version": 0, 00:22:45.239 "enable_ktls": false 00:22:45.239 } 00:22:45.239 } 00:22:45.239 ] 00:22:45.239 }, 00:22:45.239 { 00:22:45.239 
"subsystem": "vmd", 00:22:45.239 "config": [] 00:22:45.239 }, 00:22:45.239 { 00:22:45.239 "subsystem": "accel", 00:22:45.239 "config": [ 00:22:45.239 { 00:22:45.239 "method": "accel_set_options", 00:22:45.239 "params": { 00:22:45.239 "small_cache_size": 128, 00:22:45.239 "large_cache_size": 16, 00:22:45.239 "task_count": 2048, 00:22:45.239 "sequence_count": 2048, 00:22:45.239 "buf_count": 2048 00:22:45.240 } 00:22:45.240 } 00:22:45.240 ] 00:22:45.240 }, 00:22:45.240 { 00:22:45.240 "subsystem": "bdev", 00:22:45.240 "config": [ 00:22:45.240 { 00:22:45.240 "method": "bdev_set_options", 00:22:45.240 "params": { 00:22:45.240 "bdev_io_pool_size": 65535, 00:22:45.240 "bdev_io_cache_size": 256, 00:22:45.240 "bdev_auto_examine": true, 00:22:45.240 "iobuf_small_cache_size": 128, 00:22:45.240 "iobuf_large_cache_size": 16 00:22:45.240 } 00:22:45.240 }, 00:22:45.240 { 00:22:45.240 "method": "bdev_raid_set_options", 00:22:45.240 "params": { 00:22:45.240 "process_window_size_kb": 1024, 00:22:45.240 "process_max_bandwidth_mb_sec": 0 00:22:45.240 } 00:22:45.240 }, 00:22:45.240 { 00:22:45.240 "method": "bdev_iscsi_set_options", 00:22:45.240 "params": { 00:22:45.240 "timeout_sec": 30 00:22:45.240 } 00:22:45.240 }, 00:22:45.240 { 00:22:45.240 "method": "bdev_nvme_set_options", 00:22:45.240 "params": { 00:22:45.240 "action_on_timeout": "none", 00:22:45.240 "timeout_us": 0, 00:22:45.240 "timeout_admin_us": 0, 00:22:45.240 "keep_alive_timeout_ms": 10000, 00:22:45.240 "arbitration_burst": 0, 00:22:45.240 "low_priority_weight": 0, 00:22:45.240 "medium_priority_weight": 0, 00:22:45.240 "high_priority_weight": 0, 00:22:45.240 "nvme_adminq_poll_period_us": 10000, 00:22:45.240 "nvme_ioq_poll_period_us": 0, 00:22:45.240 "io_queue_requests": 512, 00:22:45.240 "delay_cmd_submit": true, 00:22:45.240 "transport_retry_count": 4, 00:22:45.240 "bdev_retry_count": 3, 00:22:45.240 "transport_ack_timeout": 0, 00:22:45.240 "ctrlr_loss_timeout_sec": 0, 00:22:45.240 "reconnect_delay_sec": 0, 00:22:45.240 "fast_io_fail_timeout_sec": 0, 00:22:45.240 "disable_auto_failback": false, 00:22:45.240 "generate_uuids": false, 00:22:45.240 "transport_tos": 0, 00:22:45.240 "nvme_error_stat": false, 00:22:45.240 "rdma_srq_size": 0, 00:22:45.240 "io_path_stat": false, 00:22:45.240 "allow_accel_sequence": false, 00:22:45.240 "rdma_max_cq_size": 0, 00:22:45.240 "rdma_cm_event_timeout_ms": 0, 00:22:45.240 "dhchap_digests": [ 00:22:45.240 "sha256", 00:22:45.240 "sha384", 00:22:45.240 "sha512" 00:22:45.240 ], 00:22:45.240 "dhchap_dhgroups": [ 00:22:45.240 "null", 00:22:45.240 "ffdhe2048", 00:22:45.240 "ffdhe3072", 00:22:45.240 "ffdhe4096", 00:22:45.240 "ffdhe6144", 00:22:45.240 "ffdhe8192" 00:22:45.240 ] 00:22:45.240 } 00:22:45.240 }, 00:22:45.240 { 00:22:45.240 "method": "bdev_nvme_attach_controller", 00:22:45.240 "params": { 00:22:45.240 "name": "TLSTEST", 00:22:45.240 "trtype": "TCP", 00:22:45.240 "adrfam": "IPv4", 00:22:45.240 "traddr": "10.0.0.2", 00:22:45.240 "trsvcid": "4420", 00:22:45.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.240 "prchk_reftag": false, 00:22:45.240 "prchk_guard": false, 00:22:45.240 "ctrlr_loss_timeout_sec": 0, 00:22:45.240 "reconnect_delay_sec": 0, 00:22:45.240 "fast_io_fail_timeout_sec": 0, 00:22:45.240 "psk": "key0", 00:22:45.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.240 "hdgst": false, 00:22:45.240 "ddgst": false 00:22:45.240 } 00:22:45.240 }, 00:22:45.240 { 00:22:45.240 "method": "bdev_nvme_set_hotplug", 00:22:45.240 "params": { 00:22:45.240 "period_us": 100000, 00:22:45.240 "enable": false 
00:22:45.240 } 00:22:45.240 }, 00:22:45.240 { 00:22:45.240 "method": "bdev_wait_for_examine" 00:22:45.240 } 00:22:45.240 ] 00:22:45.240 }, 00:22:45.240 { 00:22:45.240 "subsystem": "nbd", 00:22:45.240 "config": [] 00:22:45.240 } 00:22:45.240 ] 00:22:45.240 }' 00:22:45.240 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2090933 00:22:45.240 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2090933 ']' 00:22:45.240 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2090933 00:22:45.240 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:45.240 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.240 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2090933 00:22:45.240 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:45.240 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:45.240 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2090933' 00:22:45.240 killing process with pid 2090933 00:22:45.240 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2090933 00:22:45.240 Received shutdown signal, test time was about 10.000000 seconds 00:22:45.240 00:22:45.240 Latency(us) 00:22:45.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.240 =================================================================================================================== 00:22:45.240 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:45.240 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2090933 00:22:45.500 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2090675 00:22:45.500 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2090675 ']' 00:22:45.500 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2090675 00:22:45.500 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:45.500 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.500 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2090675 00:22:45.500 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:45.500 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:45.500 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2090675' 00:22:45.500 killing process with pid 2090675 00:22:45.500 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2090675 00:22:45.500 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2090675 00:22:45.760 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:45.760 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:45.760 11:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:45.760 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:22:45.760 "subsystems": [ 00:22:45.760 { 00:22:45.760 "subsystem": "keyring", 00:22:45.760 "config": [ 00:22:45.760 { 00:22:45.760 "method": "keyring_file_add_key", 00:22:45.760 "params": { 00:22:45.760 "name": "key0", 00:22:45.760 "path": "/tmp/tmp.WvEZEFM6mW" 00:22:45.760 } 00:22:45.760 } 00:22:45.760 ] 00:22:45.760 }, 00:22:45.760 { 00:22:45.760 "subsystem": "iobuf", 00:22:45.760 "config": [ 00:22:45.760 { 00:22:45.760 "method": "iobuf_set_options", 00:22:45.760 "params": { 00:22:45.760 "small_pool_count": 8192, 00:22:45.760 "large_pool_count": 1024, 00:22:45.760 "small_bufsize": 8192, 00:22:45.760 "large_bufsize": 135168 00:22:45.760 } 00:22:45.760 } 00:22:45.760 ] 00:22:45.760 }, 00:22:45.760 { 00:22:45.760 "subsystem": "sock", 00:22:45.760 "config": [ 00:22:45.760 { 00:22:45.760 "method": "sock_set_default_impl", 00:22:45.760 "params": { 00:22:45.760 "impl_name": "posix" 00:22:45.760 } 00:22:45.760 }, 00:22:45.760 { 00:22:45.760 "method": "sock_impl_set_options", 00:22:45.760 "params": { 00:22:45.760 "impl_name": "ssl", 00:22:45.760 "recv_buf_size": 4096, 00:22:45.760 "send_buf_size": 4096, 00:22:45.760 "enable_recv_pipe": true, 00:22:45.760 "enable_quickack": false, 00:22:45.760 "enable_placement_id": 0, 00:22:45.760 "enable_zerocopy_send_server": true, 00:22:45.760 "enable_zerocopy_send_client": false, 00:22:45.760 "zerocopy_threshold": 0, 00:22:45.760 "tls_version": 0, 00:22:45.760 "enable_ktls": false 00:22:45.760 } 00:22:45.760 }, 00:22:45.760 { 00:22:45.760 "method": "sock_impl_set_options", 00:22:45.760 "params": { 00:22:45.760 "impl_name": "posix", 00:22:45.760 "recv_buf_size": 2097152, 00:22:45.760 "send_buf_size": 2097152, 00:22:45.760 "enable_recv_pipe": true, 00:22:45.760 "enable_quickack": false, 00:22:45.760 "enable_placement_id": 0, 00:22:45.760 "enable_zerocopy_send_server": true, 00:22:45.760 "enable_zerocopy_send_client": false, 00:22:45.760 "zerocopy_threshold": 0, 00:22:45.760 "tls_version": 0, 00:22:45.760 "enable_ktls": false 00:22:45.760 } 00:22:45.760 } 00:22:45.760 ] 00:22:45.760 }, 00:22:45.760 { 00:22:45.760 "subsystem": "vmd", 00:22:45.760 "config": [] 00:22:45.760 }, 00:22:45.760 { 00:22:45.760 "subsystem": "accel", 00:22:45.760 "config": [ 00:22:45.760 { 00:22:45.760 "method": "accel_set_options", 00:22:45.760 "params": { 00:22:45.760 "small_cache_size": 128, 00:22:45.760 "large_cache_size": 16, 00:22:45.760 "task_count": 2048, 00:22:45.760 "sequence_count": 2048, 00:22:45.760 "buf_count": 2048 00:22:45.760 } 00:22:45.760 } 00:22:45.760 ] 00:22:45.760 }, 00:22:45.760 { 00:22:45.760 "subsystem": "bdev", 00:22:45.760 "config": [ 00:22:45.760 { 00:22:45.761 "method": "bdev_set_options", 00:22:45.761 "params": { 00:22:45.761 "bdev_io_pool_size": 65535, 00:22:45.761 "bdev_io_cache_size": 256, 00:22:45.761 "bdev_auto_examine": true, 00:22:45.761 "iobuf_small_cache_size": 128, 00:22:45.761 "iobuf_large_cache_size": 16 00:22:45.761 } 00:22:45.761 }, 00:22:45.761 { 00:22:45.761 "method": "bdev_raid_set_options", 00:22:45.761 "params": { 00:22:45.761 "process_window_size_kb": 1024, 00:22:45.761 "process_max_bandwidth_mb_sec": 0 00:22:45.761 } 00:22:45.761 }, 00:22:45.761 { 00:22:45.761 "method": "bdev_iscsi_set_options", 00:22:45.761 "params": { 00:22:45.761 "timeout_sec": 30 00:22:45.761 } 00:22:45.761 }, 00:22:45.761 { 00:22:45.761 "method": "bdev_nvme_set_options", 00:22:45.761 
"params": { 00:22:45.761 "action_on_timeout": "none", 00:22:45.761 "timeout_us": 0, 00:22:45.761 "timeout_admin_us": 0, 00:22:45.761 "keep_alive_timeout_ms": 10000, 00:22:45.761 "arbitration_burst": 0, 00:22:45.761 "low_priority_weight": 0, 00:22:45.761 "medium_priority_weight": 0, 00:22:45.761 "high_priority_weight": 0, 00:22:45.761 "nvme_adminq_poll_period_us": 10000, 00:22:45.761 "nvme_ioq_poll_period_us": 0, 00:22:45.761 "io_queue_requests": 0, 00:22:45.761 "delay_cmd_submit": true, 00:22:45.761 "transport_retry_count": 4, 00:22:45.761 "bdev_retry_count": 3, 00:22:45.761 "transport_ack_timeout": 0, 00:22:45.761 "ctrlr_loss_timeout_sec": 0, 00:22:45.761 "reconnect_delay_sec": 0, 00:22:45.761 "fast_io_fail_timeout_sec": 0, 00:22:45.761 "disable_auto_failback": false, 00:22:45.761 "generate_uuids": false, 00:22:45.761 "transport_tos": 0, 00:22:45.761 "nvme_error_stat": false, 00:22:45.761 "rdma_srq_size": 0, 00:22:45.761 "io_path_stat": false, 00:22:45.761 "allow_accel_sequence": false, 00:22:45.761 "rdma_max_cq_size": 0, 00:22:45.761 "rdma_cm_event_timeout_ms": 0, 00:22:45.761 "dhchap_digests": [ 00:22:45.761 "sha256", 00:22:45.761 "sha384", 00:22:45.761 "sha512" 00:22:45.761 ], 00:22:45.761 "dhchap_dhgroups": [ 00:22:45.761 "null", 00:22:45.761 "ffdhe2048", 00:22:45.761 "ffdhe3072", 00:22:45.761 "ffdhe4096", 00:22:45.761 "ffdhe6144", 00:22:45.761 "ffdhe8192" 00:22:45.761 ] 00:22:45.761 } 00:22:45.761 }, 00:22:45.761 { 00:22:45.761 "method": "bdev_nvme_set_hotplug", 00:22:45.761 "params": { 00:22:45.761 "period_us": 100000, 00:22:45.761 "enable": false 00:22:45.761 } 00:22:45.761 }, 00:22:45.761 { 00:22:45.761 "method": "bdev_malloc_create", 00:22:45.761 "params": { 00:22:45.761 "name": "malloc0", 00:22:45.761 "num_blocks": 8192, 00:22:45.761 "block_size": 4096, 00:22:45.761 "physical_block_size": 4096, 00:22:45.761 "uuid": "21e67e53-de61-4794-a288-f151726e03b0", 00:22:45.761 "optimal_io_boundary": 0, 00:22:45.761 "md_size": 0, 00:22:45.761 "dif_type": 0, 00:22:45.761 "dif_is_head_of_md": false, 00:22:45.761 "dif_pi_format": 0 00:22:45.761 } 00:22:45.761 }, 00:22:45.761 { 00:22:45.761 "method": "bdev_wait_for_examine" 00:22:45.761 } 00:22:45.761 ] 00:22:45.761 }, 00:22:45.761 { 00:22:45.761 "subsystem": "nbd", 00:22:45.761 "config": [] 00:22:45.761 }, 00:22:45.761 { 00:22:45.761 "subsystem": "scheduler", 00:22:45.761 "config": [ 00:22:45.761 { 00:22:45.761 "method": "framework_set_scheduler", 00:22:45.761 "params": { 00:22:45.761 "name": "static" 00:22:45.761 } 00:22:45.761 } 00:22:45.761 ] 00:22:45.761 }, 00:22:45.761 { 00:22:45.761 "subsystem": "nvmf", 00:22:45.761 "config": [ 00:22:45.761 { 00:22:45.761 "method": "nvmf_set_config", 00:22:45.761 "params": { 00:22:45.761 "discovery_filter": "match_any", 00:22:45.761 "admin_cmd_passthru": { 00:22:45.761 "identify_ctrlr": false 00:22:45.761 }, 00:22:45.761 "dhchap_digests": [ 00:22:45.761 "sha256", 00:22:45.761 "sha384", 00:22:45.761 "sha512" 00:22:45.761 ], 00:22:45.761 "dhchap_dhgroups": [ 00:22:45.761 "null", 00:22:45.761 "ffdhe2048", 00:22:45.761 "ffdhe3072", 00:22:45.761 "ffdhe4096", 00:22:45.761 "ffdhe6144", 00:22:45.761 "ffdhe8192" 00:22:45.761 ] 00:22:45.761 } 00:22:45.761 }, 00:22:45.761 { 00:22:45.761 "method": "nvmf_set_max_subsystems", 00:22:45.761 "params": { 00:22:45.761 "max_subsystems": 1024 00:22:45.761 } 00:22:45.761 }, 00:22:45.761 { 00:22:45.761 "method": "nvmf_set_crdt", 00:22:45.761 "params": { 00:22:45.761 "crdt1": 0, 00:22:45.761 "crdt2": 0, 00:22:45.761 "crdt3": 0 00:22:45.761 } 00:22:45.761 }, 00:22:45.761 { 
00:22:45.761 "method": "nvmf_create_transport", 00:22:45.761 "params": { 00:22:45.761 "trtype": "TCP", 00:22:45.761 "max_queue_depth": 128, 00:22:45.761 "max_io_qpairs_per_ctrlr": 127, 00:22:45.761 "in_capsule_data_size": 4096, 00:22:45.761 "max_io_size": 131072, 00:22:45.761 "io_unit_size": 131072, 00:22:45.761 "max_aq_depth": 128, 00:22:45.761 "num_shared_buffers": 511, 00:22:45.761 "buf_cache_size": 4294967295, 00:22:45.761 "dif_insert_or_strip": false, 00:22:45.761 "zcopy": false, 00:22:45.761 "c2h_success": false, 00:22:45.761 "sock_priority": 0, 00:22:45.761 "abort_timeout_sec": 1, 00:22:45.761 "ack_timeout": 0, 00:22:45.761 "data_wr_pool_size": 0 00:22:45.761 } 00:22:45.761 }, 00:22:45.761 { 00:22:45.762 "method": "nvmf_create_subsystem", 00:22:45.762 "params": { 00:22:45.762 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.762 "allow_any_host": false, 00:22:45.762 "serial_number": "SPDK00000000000001", 00:22:45.762 "model_number": "SPDK bdev Controller", 00:22:45.762 "max_namespaces": 10, 00:22:45.762 "min_cntlid": 1, 00:22:45.762 "max_cntlid": 65519, 00:22:45.762 "ana_reporting": false 00:22:45.762 } 00:22:45.762 }, 00:22:45.762 { 00:22:45.762 "method": "nvmf_subsystem_add_host", 00:22:45.762 "params": { 00:22:45.762 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.762 "host": "nqn.2016-06.io.spdk:host1", 00:22:45.762 "psk": "key0" 00:22:45.762 } 00:22:45.762 }, 00:22:45.762 { 00:22:45.762 "method": "nvmf_subsystem_add_ns", 00:22:45.762 "params": { 00:22:45.762 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.762 "namespace": { 00:22:45.762 "nsid": 1, 00:22:45.762 "bdev_name": "malloc0", 00:22:45.762 "nguid": "21E67E53DE614794A288F151726E03B0", 00:22:45.762 "uuid": "21e67e53-de61-4794-a288-f151726e03b0", 00:22:45.762 "no_auto_visible": false 00:22:45.762 } 00:22:45.762 } 00:22:45.762 }, 00:22:45.762 { 00:22:45.762 "method": "nvmf_subsystem_add_listener", 00:22:45.762 "params": { 00:22:45.762 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.762 "listen_address": { 00:22:45.762 "trtype": "TCP", 00:22:45.762 "adrfam": "IPv4", 00:22:45.762 "traddr": "10.0.0.2", 00:22:45.762 "trsvcid": "4420" 00:22:45.762 }, 00:22:45.762 "secure_channel": true 00:22:45.762 } 00:22:45.762 } 00:22:45.762 ] 00:22:45.762 } 00:22:45.762 ] 00:22:45.762 }' 00:22:45.762 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.762 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2091324 00:22:45.762 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:45.762 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2091324 00:22:45.762 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2091324 ']' 00:22:45.762 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.762 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.762 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:45.762 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.762 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.762 [2024-10-06 11:17:43.251472] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:22:45.762 [2024-10-06 11:17:43.251521] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.762 [2024-10-06 11:17:43.309691] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.021 [2024-10-06 11:17:43.345910] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.021 [2024-10-06 11:17:43.345953] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.021 [2024-10-06 11:17:43.345964] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.022 [2024-10-06 11:17:43.345970] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.022 [2024-10-06 11:17:43.345975] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:46.022 [2024-10-06 11:17:43.346513] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.022 [2024-10-06 11:17:43.563287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.022 [2024-10-06 11:17:43.595223] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:46.022 [2024-10-06 11:17:43.595420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.590 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:46.590 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:46.590 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:46.590 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:46.590 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.590 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.590 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2091404 00:22:46.590 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2091404 /var/tmp/bdevperf.sock 00:22:46.590 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2091404 ']' 00:22:46.590 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:46.590 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.590 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:46.590 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:22:46.590 "subsystems": [ 00:22:46.590 { 00:22:46.590 
"subsystem": "keyring", 00:22:46.590 "config": [ 00:22:46.590 { 00:22:46.590 "method": "keyring_file_add_key", 00:22:46.590 "params": { 00:22:46.590 "name": "key0", 00:22:46.590 "path": "/tmp/tmp.WvEZEFM6mW" 00:22:46.590 } 00:22:46.590 } 00:22:46.590 ] 00:22:46.591 }, 00:22:46.591 { 00:22:46.591 "subsystem": "iobuf", 00:22:46.591 "config": [ 00:22:46.591 { 00:22:46.591 "method": "iobuf_set_options", 00:22:46.591 "params": { 00:22:46.591 "small_pool_count": 8192, 00:22:46.591 "large_pool_count": 1024, 00:22:46.591 "small_bufsize": 8192, 00:22:46.591 "large_bufsize": 135168 00:22:46.591 } 00:22:46.591 } 00:22:46.591 ] 00:22:46.591 }, 00:22:46.591 { 00:22:46.591 "subsystem": "sock", 00:22:46.591 "config": [ 00:22:46.591 { 00:22:46.591 "method": "sock_set_default_impl", 00:22:46.591 "params": { 00:22:46.591 "impl_name": "posix" 00:22:46.591 } 00:22:46.591 }, 00:22:46.591 { 00:22:46.591 "method": "sock_impl_set_options", 00:22:46.591 "params": { 00:22:46.591 "impl_name": "ssl", 00:22:46.591 "recv_buf_size": 4096, 00:22:46.591 "send_buf_size": 4096, 00:22:46.591 "enable_recv_pipe": true, 00:22:46.591 "enable_quickack": false, 00:22:46.591 "enable_placement_id": 0, 00:22:46.591 "enable_zerocopy_send_server": true, 00:22:46.591 "enable_zerocopy_send_client": false, 00:22:46.591 "zerocopy_threshold": 0, 00:22:46.591 "tls_version": 0, 00:22:46.591 "enable_ktls": false 00:22:46.591 } 00:22:46.591 }, 00:22:46.591 { 00:22:46.591 "method": "sock_impl_set_options", 00:22:46.591 "params": { 00:22:46.591 "impl_name": "posix", 00:22:46.591 "recv_buf_size": 2097152, 00:22:46.591 "send_buf_size": 2097152, 00:22:46.591 "enable_recv_pipe": true, 00:22:46.591 "enable_quickack": false, 00:22:46.591 "enable_placement_id": 0, 00:22:46.591 "enable_zerocopy_send_server": true, 00:22:46.591 "enable_zerocopy_send_client": false, 00:22:46.591 "zerocopy_threshold": 0, 00:22:46.591 "tls_version": 0, 00:22:46.591 "enable_ktls": false 00:22:46.591 } 00:22:46.591 } 00:22:46.591 ] 00:22:46.591 }, 00:22:46.591 { 00:22:46.591 "subsystem": "vmd", 00:22:46.591 "config": [] 00:22:46.591 }, 00:22:46.591 { 00:22:46.591 "subsystem": "accel", 00:22:46.591 "config": [ 00:22:46.591 { 00:22:46.591 "method": "accel_set_options", 00:22:46.591 "params": { 00:22:46.591 "small_cache_size": 128, 00:22:46.591 "large_cache_size": 16, 00:22:46.591 "task_count": 2048, 00:22:46.591 "sequence_count": 2048, 00:22:46.591 "buf_count": 2048 00:22:46.591 } 00:22:46.591 } 00:22:46.591 ] 00:22:46.591 }, 00:22:46.591 { 00:22:46.591 "subsystem": "bdev", 00:22:46.591 "config": [ 00:22:46.591 { 00:22:46.591 "method": "bdev_set_options", 00:22:46.591 "params": { 00:22:46.591 "bdev_io_pool_size": 65535, 00:22:46.591 "bdev_io_cache_size": 256, 00:22:46.591 "bdev_auto_examine": true, 00:22:46.591 "iobuf_small_cache_size": 128, 00:22:46.591 "iobuf_large_cache_size": 16 00:22:46.591 } 00:22:46.591 }, 00:22:46.591 { 00:22:46.591 "method": "bdev_raid_set_options", 00:22:46.591 "params": { 00:22:46.591 "process_window_size_kb": 1024, 00:22:46.591 "process_max_bandwidth_mb_sec": 0 00:22:46.591 } 00:22:46.591 }, 00:22:46.591 { 00:22:46.591 "method": "bdev_iscsi_set_options", 00:22:46.591 "params": { 00:22:46.591 "timeout_sec": 30 00:22:46.591 } 00:22:46.591 }, 00:22:46.591 { 00:22:46.591 "method": "bdev_nvme_set_options", 00:22:46.591 "params": { 00:22:46.591 "action_on_timeout": "none", 00:22:46.591 "timeout_us": 0, 00:22:46.591 "timeout_admin_us": 0, 00:22:46.591 "keep_alive_timeout_ms": 10000, 00:22:46.591 "arbitration_burst": 0, 00:22:46.591 "low_priority_weight": 0, 
00:22:46.591 "medium_priority_weight": 0, 00:22:46.591 "high_priority_weight": 0, 00:22:46.591 "nvme_adminq_poll_period_us": 10000, 00:22:46.591 "nvme_ioq_poll_period_us": 0, 00:22:46.591 "io_queue_requests": 512, 00:22:46.591 "delay_cmd_submit": true, 00:22:46.591 "transport_retry_count": 4, 00:22:46.591 "bdev_retry_count": 3, 00:22:46.591 "transport_ack_timeout": 0, 00:22:46.591 "ctrlr_loss_timeout_sec": 0, 00:22:46.591 "reconnect_delay_sec": 0, 00:22:46.591 "fast_io_fail_timeout_sec": 0, 00:22:46.591 "disable_auto_failback": false, 00:22:46.591 "generate_uuids": false, 00:22:46.591 "transport_tos": 0, 00:22:46.591 "nvme_error_stat": false, 00:22:46.591 "rdma_srq_size": 0, 00:22:46.591 "io_path_stat": false, 00:22:46.591 "allow_accel_sequence": false, 00:22:46.591 "rdma_max_cq_size": 0, 00:22:46.591 "rdma_cm_event_timeout_ms": 0, 00:22:46.591 "dhchap_digests": [ 00:22:46.591 "sha256", 00:22:46.591 "sha384", 00:22:46.591 "sha512" 00:22:46.591 ], 00:22:46.591 "dhchap_dhgroups": [ 00:22:46.591 "null", 00:22:46.591 "ffdhe2048", 00:22:46.591 "ffdhe3072", 00:22:46.591 "ffdhe4096", 00:22:46.591 "ffdhe6144", 00:22:46.591 "ffdhe8192" 00:22:46.591 ] 00:22:46.591 } 00:22:46.591 }, 00:22:46.591 { 00:22:46.591 "method": "bdev_nvme_attach_controller", 00:22:46.591 "params": { 00:22:46.591 "name": "TLSTEST", 00:22:46.591 "trtype": "TCP", 00:22:46.591 "adrfam": "IPv4", 00:22:46.591 "traddr": "10.0.0.2", 00:22:46.591 "trsvcid": "4420", 00:22:46.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.591 "prchk_reftag": false, 00:22:46.591 "prchk_guard": false, 00:22:46.591 "ctrlr_loss_timeout_sec": 0, 00:22:46.591 "reconnect_delay_sec": 0, 00:22:46.591 "fast_io_fail_timeout_sec": 0, 00:22:46.591 "psk": "key0", 00:22:46.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:46.591 "hdgst": false, 00:22:46.591 "ddgst": false 00:22:46.591 } 00:22:46.591 }, 00:22:46.591 { 00:22:46.591 "method": "bdev_nvme_set_hotplug", 00:22:46.591 "params": { 00:22:46.591 "period_us": 100000, 00:22:46.591 "enable": false 00:22:46.591 } 00:22:46.591 }, 00:22:46.591 { 00:22:46.591 "method": "bdev_wait_for_examine" 00:22:46.591 } 00:22:46.591 ] 00:22:46.591 }, 00:22:46.591 { 00:22:46.591 "subsystem": "nbd", 00:22:46.591 "config": [] 00:22:46.591 } 00:22:46.591 ] 00:22:46.591 }' 00:22:46.591 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:46.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:46.591 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:46.591 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.591 [2024-10-06 11:17:44.151840] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:22:46.591 [2024-10-06 11:17:44.151891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2091404 ] 00:22:46.851 [2024-10-06 11:17:44.201571] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.851 [2024-10-06 11:17:44.241165] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.851 [2024-10-06 11:17:44.388313] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:47.421 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:47.421 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:47.421 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:47.679 Running I/O for 10 seconds... 00:22:57.561 5510.00 IOPS, 21.52 MiB/s 4788.50 IOPS, 18.71 MiB/s 4051.00 IOPS, 15.82 MiB/s 3696.25 IOPS, 14.44 MiB/s 3550.20 IOPS, 13.87 MiB/s 3358.33 IOPS, 13.12 MiB/s 3235.29 IOPS, 12.64 MiB/s 3141.38 IOPS, 12.27 MiB/s 3098.78 IOPS, 12.10 MiB/s 3042.80 IOPS, 11.89 MiB/s 00:22:57.562 Latency(us) 00:22:57.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.562 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:57.562 Verification LBA range: start 0x0 length 0x2000 00:22:57.562 TLSTESTn1 : 10.03 3046.30 11.90 0.00 0.00 41954.87 6179.11 70404.39 00:22:57.562 =================================================================================================================== 00:22:57.562 Total : 3046.30 11.90 0.00 0.00 41954.87 6179.11 70404.39 00:22:57.562 { 00:22:57.562 "results": [ 00:22:57.562 { 00:22:57.562 "job": "TLSTESTn1", 00:22:57.562 "core_mask": "0x4", 00:22:57.562 "workload": "verify", 00:22:57.562 "status": "finished", 00:22:57.562 "verify_range": { 00:22:57.562 "start": 0, 00:22:57.562 "length": 8192 00:22:57.562 }, 00:22:57.562 "queue_depth": 128, 00:22:57.562 "io_size": 4096, 00:22:57.562 "runtime": 10.030545, 00:22:57.562 "iops": 3046.295091642578, 00:22:57.562 "mibps": 11.89959020172882, 00:22:57.562 "io_failed": 0, 00:22:57.562 "io_timeout": 0, 00:22:57.562 "avg_latency_us": 41954.872054806474, 00:22:57.562 "min_latency_us": 6179.108571428572, 00:22:57.562 "max_latency_us": 70404.38857142857 00:22:57.562 } 00:22:57.562 ], 00:22:57.562 "core_count": 1 00:22:57.562 } 00:22:57.562 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.562 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2091404 00:22:57.562 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2091404 ']' 00:22:57.562 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2091404 00:22:57.562 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:57.821 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:57.821 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2091404 00:22:57.821 11:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:57.821 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:57.821 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2091404' 00:22:57.821 killing process with pid 2091404 00:22:57.821 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2091404 00:22:57.821 Received shutdown signal, test time was about 10.000000 seconds 00:22:57.821 00:22:57.821 Latency(us) 00:22:57.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.821 =================================================================================================================== 00:22:57.821 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:57.821 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2091404 00:22:57.821 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2091324 00:22:57.821 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2091324 ']' 00:22:57.821 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2091324 00:22:57.821 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:57.821 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:57.821 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2091324 00:22:58.080 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:58.080 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:58.080 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2091324' 00:22:58.080 killing process with pid 2091324 00:22:58.080 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2091324 00:22:58.080 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2091324 00:22:58.080 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:22:58.080 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:58.080 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:58.080 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.080 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2093249 00:22:58.080 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2093249 00:22:58.080 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:58.080 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2093249 ']' 00:22:58.080 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.080 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:58.080 11:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.080 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:58.080 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.340 [2024-10-06 11:17:55.665641] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:22:58.340 [2024-10-06 11:17:55.665690] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.340 [2024-10-06 11:17:55.727258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.340 [2024-10-06 11:17:55.764094] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.340 [2024-10-06 11:17:55.764138] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.340 [2024-10-06 11:17:55.764145] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.340 [2024-10-06 11:17:55.764151] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.340 [2024-10-06 11:17:55.764157] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:58.340 [2024-10-06 11:17:55.764692] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.340 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:58.340 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:58.340 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:58.340 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.340 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.340 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.340 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.WvEZEFM6mW 00:22:58.340 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.WvEZEFM6mW 00:22:58.340 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:58.599 [2024-10-06 11:17:56.057605] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.599 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:58.857 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:58.857 [2024-10-06 11:17:56.418543] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered 
experimental 00:22:58.857 [2024-10-06 11:17:56.418782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.116 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:59.116 malloc0 00:22:59.116 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:59.374 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.WvEZEFM6mW 00:22:59.634 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:59.634 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2093611 00:22:59.634 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:59.634 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.634 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2093611 /var/tmp/bdevperf.sock 00:22:59.634 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2093611 ']' 00:22:59.634 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.634 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:59.634 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.634 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:59.634 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.894 [2024-10-06 11:17:57.227228] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:22:59.894 [2024-10-06 11:17:57.227281] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093611 ] 00:22:59.894 [2024-10-06 11:17:57.281854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.894 [2024-10-06 11:17:57.321711] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.894 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.894 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:59.894 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WvEZEFM6mW 00:23:00.153 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:00.411 [2024-10-06 11:17:57.755930] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.411 nvme0n1 00:23:00.411 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:00.411 Running I/O for 1 seconds... 00:23:01.618 5442.00 IOPS, 21.26 MiB/s 00:23:01.618 Latency(us) 00:23:01.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.618 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.618 Verification LBA range: start 0x0 length 0x2000 00:23:01.618 nvme0n1 : 1.02 5455.88 21.31 0.00 0.00 23240.63 6085.49 45687.95 00:23:01.618 =================================================================================================================== 00:23:01.618 Total : 5455.88 21.31 0.00 0.00 23240.63 6085.49 45687.95 00:23:01.618 { 00:23:01.618 "results": [ 00:23:01.618 { 00:23:01.618 "job": "nvme0n1", 00:23:01.618 "core_mask": "0x2", 00:23:01.618 "workload": "verify", 00:23:01.618 "status": "finished", 00:23:01.618 "verify_range": { 00:23:01.618 "start": 0, 00:23:01.618 "length": 8192 00:23:01.618 }, 00:23:01.618 "queue_depth": 128, 00:23:01.618 "io_size": 4096, 00:23:01.618 "runtime": 1.021101, 00:23:01.618 "iops": 5455.875569605749, 00:23:01.618 "mibps": 21.312013943772456, 00:23:01.618 "io_failed": 0, 00:23:01.618 "io_timeout": 0, 00:23:01.618 "avg_latency_us": 23240.62648511424, 00:23:01.618 "min_latency_us": 6085.4857142857145, 00:23:01.618 "max_latency_us": 45687.95428571429 00:23:01.618 } 00:23:01.618 ], 00:23:01.618 "core_count": 1 00:23:01.618 } 00:23:01.618 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2093611 00:23:01.618 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2093611 ']' 00:23:01.618 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2093611 00:23:01.618 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:01.618 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:23:01.618 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2093611 00:23:01.618 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:01.618 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:01.618 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2093611' 00:23:01.618 killing process with pid 2093611 00:23:01.618 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2093611 00:23:01.618 Received shutdown signal, test time was about 1.000000 seconds 00:23:01.618 00:23:01.618 Latency(us) 00:23:01.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.618 =================================================================================================================== 00:23:01.618 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:01.618 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2093611 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2093249 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2093249 ']' 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2093249 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2093249 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2093249' 00:23:01.878 killing process with pid 2093249 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2093249 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2093249 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2093906 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2093906 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2093906 ']' 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:01.878 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.137 [2024-10-06 11:17:59.476856] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:23:02.137 [2024-10-06 11:17:59.476902] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.137 [2024-10-06 11:17:59.533805] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.137 [2024-10-06 11:17:59.572739] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.138 [2024-10-06 11:17:59.572776] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.138 [2024-10-06 11:17:59.572783] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.138 [2024-10-06 11:17:59.572789] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.138 [2024-10-06 11:17:59.572798] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:02.138 [2024-10-06 11:17:59.573314] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.138 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:02.138 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:02.138 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:02.138 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:02.138 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.138 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.138 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:02.138 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.138 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.138 [2024-10-06 11:17:59.693526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.397 malloc0 00:23:02.397 [2024-10-06 11:17:59.736360] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:02.397 [2024-10-06 11:17:59.736566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.397 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.397 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2093964 00:23:02.397 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2093964 /var/tmp/bdevperf.sock 00:23:02.397 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:02.397 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2093964 ']' 00:23:02.397 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.397 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:02.397 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:02.397 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:02.397 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.397 [2024-10-06 11:17:59.811031] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:23:02.397 [2024-10-06 11:17:59.811105] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093964 ] 00:23:02.397 [2024-10-06 11:17:59.866267] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.397 [2024-10-06 11:17:59.906778] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.656 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:02.656 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:02.656 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WvEZEFM6mW 00:23:02.656 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:02.915 [2024-10-06 11:18:00.356146] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:02.915 nvme0n1 00:23:02.915 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:03.175 Running I/O for 1 seconds... 00:23:04.252 2662.00 IOPS, 10.40 MiB/s 00:23:04.252 Latency(us) 00:23:04.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.252 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:04.252 Verification LBA range: start 0x0 length 0x2000 00:23:04.252 nvme0n1 : 1.03 2702.61 10.56 0.00 0.00 46911.61 6085.49 65411.17 00:23:04.252 =================================================================================================================== 00:23:04.252 Total : 2702.61 10.56 0.00 0.00 46911.61 6085.49 65411.17 00:23:04.252 { 00:23:04.252 "results": [ 00:23:04.252 { 00:23:04.252 "job": "nvme0n1", 00:23:04.252 "core_mask": "0x2", 00:23:04.252 "workload": "verify", 00:23:04.252 "status": "finished", 00:23:04.252 "verify_range": { 00:23:04.252 "start": 0, 00:23:04.252 "length": 8192 00:23:04.252 }, 00:23:04.252 "queue_depth": 128, 00:23:04.252 "io_size": 4096, 00:23:04.252 "runtime": 1.032334, 00:23:04.252 "iops": 2702.6136889805043, 00:23:04.252 "mibps": 10.557084722580095, 00:23:04.252 "io_failed": 0, 00:23:04.252 "io_timeout": 0, 00:23:04.252 "avg_latency_us": 46911.611739887354, 00:23:04.252 "min_latency_us": 6085.4857142857145, 00:23:04.252 "max_latency_us": 65411.16952380953 00:23:04.252 } 00:23:04.252 ], 00:23:04.252 "core_count": 1 00:23:04.252 } 00:23:04.252 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:04.252 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.252 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.252 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.252 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:04.252 "subsystems": [ 
00:23:04.252 { 00:23:04.252 "subsystem": "keyring", 00:23:04.252 "config": [ 00:23:04.252 { 00:23:04.252 "method": "keyring_file_add_key", 00:23:04.252 "params": { 00:23:04.252 "name": "key0", 00:23:04.252 "path": "/tmp/tmp.WvEZEFM6mW" 00:23:04.252 } 00:23:04.252 } 00:23:04.252 ] 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "subsystem": "iobuf", 00:23:04.252 "config": [ 00:23:04.252 { 00:23:04.252 "method": "iobuf_set_options", 00:23:04.252 "params": { 00:23:04.252 "small_pool_count": 8192, 00:23:04.252 "large_pool_count": 1024, 00:23:04.252 "small_bufsize": 8192, 00:23:04.252 "large_bufsize": 135168 00:23:04.252 } 00:23:04.252 } 00:23:04.252 ] 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "subsystem": "sock", 00:23:04.252 "config": [ 00:23:04.252 { 00:23:04.252 "method": "sock_set_default_impl", 00:23:04.252 "params": { 00:23:04.252 "impl_name": "posix" 00:23:04.252 } 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "method": "sock_impl_set_options", 00:23:04.252 "params": { 00:23:04.252 "impl_name": "ssl", 00:23:04.252 "recv_buf_size": 4096, 00:23:04.252 "send_buf_size": 4096, 00:23:04.252 "enable_recv_pipe": true, 00:23:04.252 "enable_quickack": false, 00:23:04.252 "enable_placement_id": 0, 00:23:04.252 "enable_zerocopy_send_server": true, 00:23:04.252 "enable_zerocopy_send_client": false, 00:23:04.252 "zerocopy_threshold": 0, 00:23:04.252 "tls_version": 0, 00:23:04.252 "enable_ktls": false 00:23:04.252 } 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "method": "sock_impl_set_options", 00:23:04.252 "params": { 00:23:04.252 "impl_name": "posix", 00:23:04.252 "recv_buf_size": 2097152, 00:23:04.252 "send_buf_size": 2097152, 00:23:04.252 "enable_recv_pipe": true, 00:23:04.252 "enable_quickack": false, 00:23:04.252 "enable_placement_id": 0, 00:23:04.252 "enable_zerocopy_send_server": true, 00:23:04.252 "enable_zerocopy_send_client": false, 00:23:04.252 "zerocopy_threshold": 0, 00:23:04.252 "tls_version": 0, 00:23:04.252 "enable_ktls": false 00:23:04.252 } 00:23:04.252 } 00:23:04.252 ] 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "subsystem": "vmd", 00:23:04.252 "config": [] 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "subsystem": "accel", 00:23:04.252 "config": [ 00:23:04.252 { 00:23:04.252 "method": "accel_set_options", 00:23:04.252 "params": { 00:23:04.252 "small_cache_size": 128, 00:23:04.252 "large_cache_size": 16, 00:23:04.252 "task_count": 2048, 00:23:04.252 "sequence_count": 2048, 00:23:04.252 "buf_count": 2048 00:23:04.252 } 00:23:04.252 } 00:23:04.252 ] 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "subsystem": "bdev", 00:23:04.252 "config": [ 00:23:04.252 { 00:23:04.252 "method": "bdev_set_options", 00:23:04.252 "params": { 00:23:04.252 "bdev_io_pool_size": 65535, 00:23:04.252 "bdev_io_cache_size": 256, 00:23:04.252 "bdev_auto_examine": true, 00:23:04.252 "iobuf_small_cache_size": 128, 00:23:04.252 "iobuf_large_cache_size": 16 00:23:04.252 } 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "method": "bdev_raid_set_options", 00:23:04.252 "params": { 00:23:04.252 "process_window_size_kb": 1024, 00:23:04.252 "process_max_bandwidth_mb_sec": 0 00:23:04.252 } 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "method": "bdev_iscsi_set_options", 00:23:04.252 "params": { 00:23:04.252 "timeout_sec": 30 00:23:04.252 } 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "method": "bdev_nvme_set_options", 00:23:04.252 "params": { 00:23:04.252 "action_on_timeout": "none", 00:23:04.252 "timeout_us": 0, 00:23:04.252 "timeout_admin_us": 0, 00:23:04.252 "keep_alive_timeout_ms": 10000, 00:23:04.252 "arbitration_burst": 0, 
00:23:04.252 "low_priority_weight": 0, 00:23:04.252 "medium_priority_weight": 0, 00:23:04.252 "high_priority_weight": 0, 00:23:04.252 "nvme_adminq_poll_period_us": 10000, 00:23:04.252 "nvme_ioq_poll_period_us": 0, 00:23:04.252 "io_queue_requests": 0, 00:23:04.252 "delay_cmd_submit": true, 00:23:04.252 "transport_retry_count": 4, 00:23:04.252 "bdev_retry_count": 3, 00:23:04.252 "transport_ack_timeout": 0, 00:23:04.252 "ctrlr_loss_timeout_sec": 0, 00:23:04.252 "reconnect_delay_sec": 0, 00:23:04.252 "fast_io_fail_timeout_sec": 0, 00:23:04.252 "disable_auto_failback": false, 00:23:04.252 "generate_uuids": false, 00:23:04.252 "transport_tos": 0, 00:23:04.252 "nvme_error_stat": false, 00:23:04.252 "rdma_srq_size": 0, 00:23:04.252 "io_path_stat": false, 00:23:04.252 "allow_accel_sequence": false, 00:23:04.252 "rdma_max_cq_size": 0, 00:23:04.252 "rdma_cm_event_timeout_ms": 0, 00:23:04.252 "dhchap_digests": [ 00:23:04.252 "sha256", 00:23:04.252 "sha384", 00:23:04.252 "sha512" 00:23:04.252 ], 00:23:04.252 "dhchap_dhgroups": [ 00:23:04.252 "null", 00:23:04.252 "ffdhe2048", 00:23:04.252 "ffdhe3072", 00:23:04.252 "ffdhe4096", 00:23:04.252 "ffdhe6144", 00:23:04.252 "ffdhe8192" 00:23:04.252 ] 00:23:04.252 } 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "method": "bdev_nvme_set_hotplug", 00:23:04.252 "params": { 00:23:04.252 "period_us": 100000, 00:23:04.252 "enable": false 00:23:04.252 } 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "method": "bdev_malloc_create", 00:23:04.252 "params": { 00:23:04.252 "name": "malloc0", 00:23:04.252 "num_blocks": 8192, 00:23:04.252 "block_size": 4096, 00:23:04.252 "physical_block_size": 4096, 00:23:04.252 "uuid": "9b95d33f-b48b-4e16-b231-2a6c072d44a0", 00:23:04.252 "optimal_io_boundary": 0, 00:23:04.252 "md_size": 0, 00:23:04.252 "dif_type": 0, 00:23:04.252 "dif_is_head_of_md": false, 00:23:04.252 "dif_pi_format": 0 00:23:04.252 } 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "method": "bdev_wait_for_examine" 00:23:04.252 } 00:23:04.252 ] 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "subsystem": "nbd", 00:23:04.252 "config": [] 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "subsystem": "scheduler", 00:23:04.252 "config": [ 00:23:04.252 { 00:23:04.252 "method": "framework_set_scheduler", 00:23:04.252 "params": { 00:23:04.252 "name": "static" 00:23:04.252 } 00:23:04.252 } 00:23:04.252 ] 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "subsystem": "nvmf", 00:23:04.252 "config": [ 00:23:04.252 { 00:23:04.252 "method": "nvmf_set_config", 00:23:04.252 "params": { 00:23:04.252 "discovery_filter": "match_any", 00:23:04.252 "admin_cmd_passthru": { 00:23:04.252 "identify_ctrlr": false 00:23:04.252 }, 00:23:04.252 "dhchap_digests": [ 00:23:04.252 "sha256", 00:23:04.252 "sha384", 00:23:04.252 "sha512" 00:23:04.252 ], 00:23:04.252 "dhchap_dhgroups": [ 00:23:04.252 "null", 00:23:04.252 "ffdhe2048", 00:23:04.252 "ffdhe3072", 00:23:04.252 "ffdhe4096", 00:23:04.252 "ffdhe6144", 00:23:04.252 "ffdhe8192" 00:23:04.252 ] 00:23:04.252 } 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "method": "nvmf_set_max_subsystems", 00:23:04.252 "params": { 00:23:04.252 "max_subsystems": 1024 00:23:04.252 } 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "method": "nvmf_set_crdt", 00:23:04.252 "params": { 00:23:04.252 "crdt1": 0, 00:23:04.252 "crdt2": 0, 00:23:04.252 "crdt3": 0 00:23:04.252 } 00:23:04.252 }, 00:23:04.252 { 00:23:04.252 "method": "nvmf_create_transport", 00:23:04.252 "params": { 00:23:04.252 "trtype": "TCP", 00:23:04.252 "max_queue_depth": 128, 00:23:04.252 "max_io_qpairs_per_ctrlr": 127, 00:23:04.252 
"in_capsule_data_size": 4096, 00:23:04.253 "max_io_size": 131072, 00:23:04.253 "io_unit_size": 131072, 00:23:04.253 "max_aq_depth": 128, 00:23:04.253 "num_shared_buffers": 511, 00:23:04.253 "buf_cache_size": 4294967295, 00:23:04.253 "dif_insert_or_strip": false, 00:23:04.253 "zcopy": false, 00:23:04.253 "c2h_success": false, 00:23:04.253 "sock_priority": 0, 00:23:04.253 "abort_timeout_sec": 1, 00:23:04.253 "ack_timeout": 0, 00:23:04.253 "data_wr_pool_size": 0 00:23:04.253 } 00:23:04.253 }, 00:23:04.253 { 00:23:04.253 "method": "nvmf_create_subsystem", 00:23:04.253 "params": { 00:23:04.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.253 "allow_any_host": false, 00:23:04.253 "serial_number": "00000000000000000000", 00:23:04.253 "model_number": "SPDK bdev Controller", 00:23:04.253 "max_namespaces": 32, 00:23:04.253 "min_cntlid": 1, 00:23:04.253 "max_cntlid": 65519, 00:23:04.253 "ana_reporting": false 00:23:04.253 } 00:23:04.253 }, 00:23:04.253 { 00:23:04.253 "method": "nvmf_subsystem_add_host", 00:23:04.253 "params": { 00:23:04.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.253 "host": "nqn.2016-06.io.spdk:host1", 00:23:04.253 "psk": "key0" 00:23:04.253 } 00:23:04.253 }, 00:23:04.253 { 00:23:04.253 "method": "nvmf_subsystem_add_ns", 00:23:04.253 "params": { 00:23:04.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.253 "namespace": { 00:23:04.253 "nsid": 1, 00:23:04.253 "bdev_name": "malloc0", 00:23:04.253 "nguid": "9B95D33FB48B4E16B2312A6C072D44A0", 00:23:04.253 "uuid": "9b95d33f-b48b-4e16-b231-2a6c072d44a0", 00:23:04.253 "no_auto_visible": false 00:23:04.253 } 00:23:04.253 } 00:23:04.253 }, 00:23:04.253 { 00:23:04.253 "method": "nvmf_subsystem_add_listener", 00:23:04.253 "params": { 00:23:04.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.253 "listen_address": { 00:23:04.253 "trtype": "TCP", 00:23:04.253 "adrfam": "IPv4", 00:23:04.253 "traddr": "10.0.0.2", 00:23:04.253 "trsvcid": "4420" 00:23:04.253 }, 00:23:04.253 "secure_channel": false, 00:23:04.253 "sock_impl": "ssl" 00:23:04.253 } 00:23:04.253 } 00:23:04.253 ] 00:23:04.253 } 00:23:04.253 ] 00:23:04.253 }' 00:23:04.253 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:04.521 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:04.521 "subsystems": [ 00:23:04.521 { 00:23:04.521 "subsystem": "keyring", 00:23:04.521 "config": [ 00:23:04.521 { 00:23:04.521 "method": "keyring_file_add_key", 00:23:04.521 "params": { 00:23:04.521 "name": "key0", 00:23:04.521 "path": "/tmp/tmp.WvEZEFM6mW" 00:23:04.521 } 00:23:04.521 } 00:23:04.521 ] 00:23:04.521 }, 00:23:04.521 { 00:23:04.521 "subsystem": "iobuf", 00:23:04.521 "config": [ 00:23:04.521 { 00:23:04.521 "method": "iobuf_set_options", 00:23:04.521 "params": { 00:23:04.521 "small_pool_count": 8192, 00:23:04.521 "large_pool_count": 1024, 00:23:04.521 "small_bufsize": 8192, 00:23:04.521 "large_bufsize": 135168 00:23:04.521 } 00:23:04.521 } 00:23:04.521 ] 00:23:04.521 }, 00:23:04.521 { 00:23:04.521 "subsystem": "sock", 00:23:04.521 "config": [ 00:23:04.521 { 00:23:04.521 "method": "sock_set_default_impl", 00:23:04.521 "params": { 00:23:04.521 "impl_name": "posix" 00:23:04.521 } 00:23:04.521 }, 00:23:04.521 { 00:23:04.521 "method": "sock_impl_set_options", 00:23:04.521 "params": { 00:23:04.521 "impl_name": "ssl", 00:23:04.521 "recv_buf_size": 4096, 00:23:04.521 "send_buf_size": 4096, 00:23:04.521 "enable_recv_pipe": true, 00:23:04.521 
"enable_quickack": false, 00:23:04.521 "enable_placement_id": 0, 00:23:04.521 "enable_zerocopy_send_server": true, 00:23:04.521 "enable_zerocopy_send_client": false, 00:23:04.521 "zerocopy_threshold": 0, 00:23:04.521 "tls_version": 0, 00:23:04.521 "enable_ktls": false 00:23:04.521 } 00:23:04.521 }, 00:23:04.521 { 00:23:04.521 "method": "sock_impl_set_options", 00:23:04.521 "params": { 00:23:04.522 "impl_name": "posix", 00:23:04.522 "recv_buf_size": 2097152, 00:23:04.522 "send_buf_size": 2097152, 00:23:04.522 "enable_recv_pipe": true, 00:23:04.522 "enable_quickack": false, 00:23:04.522 "enable_placement_id": 0, 00:23:04.522 "enable_zerocopy_send_server": true, 00:23:04.522 "enable_zerocopy_send_client": false, 00:23:04.522 "zerocopy_threshold": 0, 00:23:04.522 "tls_version": 0, 00:23:04.522 "enable_ktls": false 00:23:04.522 } 00:23:04.522 } 00:23:04.522 ] 00:23:04.522 }, 00:23:04.522 { 00:23:04.522 "subsystem": "vmd", 00:23:04.522 "config": [] 00:23:04.522 }, 00:23:04.522 { 00:23:04.522 "subsystem": "accel", 00:23:04.522 "config": [ 00:23:04.522 { 00:23:04.522 "method": "accel_set_options", 00:23:04.522 "params": { 00:23:04.522 "small_cache_size": 128, 00:23:04.522 "large_cache_size": 16, 00:23:04.522 "task_count": 2048, 00:23:04.522 "sequence_count": 2048, 00:23:04.522 "buf_count": 2048 00:23:04.522 } 00:23:04.522 } 00:23:04.522 ] 00:23:04.522 }, 00:23:04.522 { 00:23:04.522 "subsystem": "bdev", 00:23:04.522 "config": [ 00:23:04.522 { 00:23:04.522 "method": "bdev_set_options", 00:23:04.522 "params": { 00:23:04.522 "bdev_io_pool_size": 65535, 00:23:04.522 "bdev_io_cache_size": 256, 00:23:04.522 "bdev_auto_examine": true, 00:23:04.522 "iobuf_small_cache_size": 128, 00:23:04.522 "iobuf_large_cache_size": 16 00:23:04.522 } 00:23:04.522 }, 00:23:04.522 { 00:23:04.522 "method": "bdev_raid_set_options", 00:23:04.522 "params": { 00:23:04.522 "process_window_size_kb": 1024, 00:23:04.522 "process_max_bandwidth_mb_sec": 0 00:23:04.522 } 00:23:04.522 }, 00:23:04.522 { 00:23:04.522 "method": "bdev_iscsi_set_options", 00:23:04.522 "params": { 00:23:04.522 "timeout_sec": 30 00:23:04.522 } 00:23:04.522 }, 00:23:04.522 { 00:23:04.522 "method": "bdev_nvme_set_options", 00:23:04.522 "params": { 00:23:04.522 "action_on_timeout": "none", 00:23:04.522 "timeout_us": 0, 00:23:04.522 "timeout_admin_us": 0, 00:23:04.522 "keep_alive_timeout_ms": 10000, 00:23:04.522 "arbitration_burst": 0, 00:23:04.522 "low_priority_weight": 0, 00:23:04.522 "medium_priority_weight": 0, 00:23:04.522 "high_priority_weight": 0, 00:23:04.522 "nvme_adminq_poll_period_us": 10000, 00:23:04.522 "nvme_ioq_poll_period_us": 0, 00:23:04.522 "io_queue_requests": 512, 00:23:04.522 "delay_cmd_submit": true, 00:23:04.522 "transport_retry_count": 4, 00:23:04.522 "bdev_retry_count": 3, 00:23:04.522 "transport_ack_timeout": 0, 00:23:04.522 "ctrlr_loss_timeout_sec": 0, 00:23:04.522 "reconnect_delay_sec": 0, 00:23:04.522 "fast_io_fail_timeout_sec": 0, 00:23:04.522 "disable_auto_failback": false, 00:23:04.522 "generate_uuids": false, 00:23:04.522 "transport_tos": 0, 00:23:04.522 "nvme_error_stat": false, 00:23:04.522 "rdma_srq_size": 0, 00:23:04.522 "io_path_stat": false, 00:23:04.522 "allow_accel_sequence": false, 00:23:04.522 "rdma_max_cq_size": 0, 00:23:04.522 "rdma_cm_event_timeout_ms": 0, 00:23:04.522 "dhchap_digests": [ 00:23:04.522 "sha256", 00:23:04.522 "sha384", 00:23:04.522 "sha512" 00:23:04.522 ], 00:23:04.522 "dhchap_dhgroups": [ 00:23:04.522 "null", 00:23:04.522 "ffdhe2048", 00:23:04.522 "ffdhe3072", 00:23:04.522 "ffdhe4096", 00:23:04.522 
"ffdhe6144", 00:23:04.522 "ffdhe8192" 00:23:04.522 ] 00:23:04.522 } 00:23:04.522 }, 00:23:04.522 { 00:23:04.522 "method": "bdev_nvme_attach_controller", 00:23:04.522 "params": { 00:23:04.522 "name": "nvme0", 00:23:04.522 "trtype": "TCP", 00:23:04.522 "adrfam": "IPv4", 00:23:04.522 "traddr": "10.0.0.2", 00:23:04.522 "trsvcid": "4420", 00:23:04.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.522 "prchk_reftag": false, 00:23:04.522 "prchk_guard": false, 00:23:04.522 "ctrlr_loss_timeout_sec": 0, 00:23:04.522 "reconnect_delay_sec": 0, 00:23:04.522 "fast_io_fail_timeout_sec": 0, 00:23:04.522 "psk": "key0", 00:23:04.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:04.522 "hdgst": false, 00:23:04.522 "ddgst": false 00:23:04.522 } 00:23:04.522 }, 00:23:04.522 { 00:23:04.522 "method": "bdev_nvme_set_hotplug", 00:23:04.522 "params": { 00:23:04.522 "period_us": 100000, 00:23:04.522 "enable": false 00:23:04.522 } 00:23:04.522 }, 00:23:04.522 { 00:23:04.522 "method": "bdev_enable_histogram", 00:23:04.522 "params": { 00:23:04.522 "name": "nvme0n1", 00:23:04.522 "enable": true 00:23:04.522 } 00:23:04.522 }, 00:23:04.522 { 00:23:04.522 "method": "bdev_wait_for_examine" 00:23:04.522 } 00:23:04.522 ] 00:23:04.522 }, 00:23:04.522 { 00:23:04.522 "subsystem": "nbd", 00:23:04.522 "config": [] 00:23:04.522 } 00:23:04.522 ] 00:23:04.522 }' 00:23:04.522 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2093964 00:23:04.522 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2093964 ']' 00:23:04.522 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2093964 00:23:04.522 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:04.522 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:04.522 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2093964 00:23:04.522 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:04.522 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:04.522 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2093964' 00:23:04.522 killing process with pid 2093964 00:23:04.522 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2093964 00:23:04.522 Received shutdown signal, test time was about 1.000000 seconds 00:23:04.522 00:23:04.522 Latency(us) 00:23:04.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.522 =================================================================================================================== 00:23:04.522 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:04.522 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2093964 00:23:04.782 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2093906 00:23:04.782 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2093906 ']' 00:23:04.782 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2093906 00:23:04.782 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:04.782 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:04.782 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2093906 00:23:04.782 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:04.782 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:04.782 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2093906' 00:23:04.782 killing process with pid 2093906 00:23:04.782 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2093906 00:23:04.782 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2093906 00:23:05.042 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:05.042 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:05.042 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:05.042 "subsystems": [ 00:23:05.042 { 00:23:05.042 "subsystem": "keyring", 00:23:05.042 "config": [ 00:23:05.042 { 00:23:05.042 "method": "keyring_file_add_key", 00:23:05.042 "params": { 00:23:05.042 "name": "key0", 00:23:05.042 "path": "/tmp/tmp.WvEZEFM6mW" 00:23:05.042 } 00:23:05.042 } 00:23:05.042 ] 00:23:05.042 }, 00:23:05.042 { 00:23:05.042 "subsystem": "iobuf", 00:23:05.042 "config": [ 00:23:05.042 { 00:23:05.042 "method": "iobuf_set_options", 00:23:05.042 "params": { 00:23:05.042 "small_pool_count": 8192, 00:23:05.042 "large_pool_count": 1024, 00:23:05.042 "small_bufsize": 8192, 00:23:05.042 "large_bufsize": 135168 00:23:05.042 } 00:23:05.042 } 00:23:05.042 ] 00:23:05.042 }, 00:23:05.042 { 00:23:05.042 "subsystem": "sock", 00:23:05.042 "config": [ 00:23:05.043 { 00:23:05.043 "method": "sock_set_default_impl", 00:23:05.043 "params": { 00:23:05.043 "impl_name": "posix" 00:23:05.043 } 00:23:05.043 }, 00:23:05.043 { 00:23:05.043 "method": "sock_impl_set_options", 00:23:05.043 "params": { 00:23:05.043 "impl_name": "ssl", 00:23:05.043 "recv_buf_size": 4096, 00:23:05.043 "send_buf_size": 4096, 00:23:05.043 "enable_recv_pipe": true, 00:23:05.043 "enable_quickack": false, 00:23:05.043 "enable_placement_id": 0, 00:23:05.043 "enable_zerocopy_send_server": true, 00:23:05.043 "enable_zerocopy_send_client": false, 00:23:05.043 "zerocopy_threshold": 0, 00:23:05.043 "tls_version": 0, 00:23:05.043 "enable_ktls": false 00:23:05.043 } 00:23:05.043 }, 00:23:05.043 { 00:23:05.043 "method": "sock_impl_set_options", 00:23:05.043 "params": { 00:23:05.043 "impl_name": "posix", 00:23:05.043 "recv_buf_size": 2097152, 00:23:05.043 "send_buf_size": 2097152, 00:23:05.043 "enable_recv_pipe": true, 00:23:05.043 "enable_quickack": false, 00:23:05.043 "enable_placement_id": 0, 00:23:05.043 "enable_zerocopy_send_server": true, 00:23:05.043 "enable_zerocopy_send_client": false, 00:23:05.043 "zerocopy_threshold": 0, 00:23:05.043 "tls_version": 0, 00:23:05.043 "enable_ktls": false 00:23:05.043 } 00:23:05.043 } 00:23:05.043 ] 00:23:05.043 }, 00:23:05.043 { 00:23:05.043 "subsystem": "vmd", 00:23:05.043 "config": [] 00:23:05.043 }, 00:23:05.043 { 00:23:05.043 "subsystem": "accel", 00:23:05.043 "config": [ 00:23:05.043 { 00:23:05.043 "method": "accel_set_options", 00:23:05.043 "params": { 00:23:05.043 "small_cache_size": 128, 00:23:05.043 "large_cache_size": 16, 00:23:05.043 "task_count": 2048, 
00:23:05.043 "sequence_count": 2048, 00:23:05.043 "buf_count": 2048 00:23:05.043 } 00:23:05.043 } 00:23:05.043 ] 00:23:05.043 }, 00:23:05.043 { 00:23:05.043 "subsystem": "bdev", 00:23:05.043 "config": [ 00:23:05.043 { 00:23:05.043 "method": "bdev_set_options", 00:23:05.043 "params": { 00:23:05.043 "bdev_io_pool_size": 65535, 00:23:05.043 "bdev_io_cache_size": 256, 00:23:05.043 "bdev_auto_examine": true, 00:23:05.043 "iobuf_small_cache_size": 128, 00:23:05.043 "iobuf_large_cache_size": 16 00:23:05.043 } 00:23:05.043 }, 00:23:05.043 { 00:23:05.043 "method": "bdev_raid_set_options", 00:23:05.043 "params": { 00:23:05.043 "process_window_size_kb": 1024, 00:23:05.043 "process_max_bandwidth_mb_sec": 0 00:23:05.043 } 00:23:05.043 }, 00:23:05.043 { 00:23:05.043 "method": "bdev_iscsi_set_options", 00:23:05.043 "params": { 00:23:05.043 "timeout_sec": 30 00:23:05.043 } 00:23:05.043 }, 00:23:05.043 { 00:23:05.043 "method": "bdev_nvme_set_options", 00:23:05.043 "params": { 00:23:05.043 "action_on_timeout": "none", 00:23:05.043 "timeout_us": 0, 00:23:05.043 "timeout_admin_us": 0, 00:23:05.043 "keep_alive_timeout_ms": 10000, 00:23:05.043 "arbitration_burst": 0, 00:23:05.043 "low_priority_weight": 0, 00:23:05.043 "medium_priority_weight": 0, 00:23:05.043 "high_priority_weight": 0, 00:23:05.043 "nvme_adminq_poll_period_us": 10000, 00:23:05.043 "nvme_ioq_poll_period_us": 0, 00:23:05.043 "io_queue_requests": 0, 00:23:05.043 "delay_cmd_submit": true, 00:23:05.043 "transport_retry_count": 4, 00:23:05.043 "bdev_retry_count": 3, 00:23:05.043 "transport_ack_timeout": 0, 00:23:05.043 "ctrlr_loss_timeout_sec": 0, 00:23:05.043 "reconnect_delay_sec": 0, 00:23:05.043 "fast_io_fail_timeout_sec": 0, 00:23:05.043 "disable_auto_failback": false, 00:23:05.043 "generate_uuids": false, 00:23:05.043 "transport_tos": 0, 00:23:05.043 "nvme_error_stat": false, 00:23:05.043 "rdma_srq_size": 0, 00:23:05.043 "io_path_stat": false, 00:23:05.043 "allow_accel_sequence": false, 00:23:05.043 "rdma_max_cq_size": 0, 00:23:05.043 "rdma_cm_event_timeout_ms": 0, 00:23:05.043 "dhchap_digests": [ 00:23:05.043 "sha256", 00:23:05.043 "sha384", 00:23:05.043 "sha512" 00:23:05.043 ], 00:23:05.043 "dhchap_dhgroups": [ 00:23:05.043 "null", 00:23:05.043 "ffdhe2048", 00:23:05.043 "ffdhe3072", 00:23:05.043 "ffdhe4096", 00:23:05.043 "ffdhe6144", 00:23:05.043 "ffdhe8192" 00:23:05.043 ] 00:23:05.043 } 00:23:05.043 }, 00:23:05.043 { 00:23:05.043 "method": "bdev_nvme_set_hotplug", 00:23:05.043 "params": { 00:23:05.043 "period_us": 100000, 00:23:05.043 "enable": false 00:23:05.043 } 00:23:05.043 }, 00:23:05.043 { 00:23:05.043 "method": "bdev_malloc_create", 00:23:05.043 "params": { 00:23:05.043 "name": "malloc0", 00:23:05.043 "num_blocks": 8192, 00:23:05.043 "block_size": 4096, 00:23:05.043 "physical_block_size": 4096, 00:23:05.043 "uuid": "9b95d33f-b48b-4e16-b231-2a6c072d44a0", 00:23:05.043 "optimal_io_boundary": 0, 00:23:05.043 "md_size": 0, 00:23:05.043 "dif_type": 0, 00:23:05.043 "dif_is_head_of_md": false, 00:23:05.043 "dif_pi_format": 0 00:23:05.043 } 00:23:05.043 }, 00:23:05.043 { 00:23:05.043 "method": "bdev_wait_for_examine" 00:23:05.043 } 00:23:05.043 ] 00:23:05.043 }, 00:23:05.043 { 00:23:05.043 "subsystem": "nbd", 00:23:05.043 "config": [] 00:23:05.043 }, 00:23:05.043 { 00:23:05.043 "subsystem": "scheduler", 00:23:05.043 "config": [ 00:23:05.043 { 00:23:05.043 "method": "framework_set_scheduler", 00:23:05.043 "params": { 00:23:05.043 "name": "static" 00:23:05.043 } 00:23:05.043 } 00:23:05.043 ] 00:23:05.043 }, 00:23:05.043 { 00:23:05.043 
"subsystem": "nvmf", 00:23:05.043 "config": [ 00:23:05.043 { 00:23:05.043 "method": "nvmf_set_config", 00:23:05.043 "params": { 00:23:05.043 "discovery_filter": "match_any", 00:23:05.043 "admin_cmd_passthru": { 00:23:05.043 "identify_ctrlr": false 00:23:05.043 }, 00:23:05.043 "dhchap_digests": [ 00:23:05.043 "sha256", 00:23:05.043 "sha384", 00:23:05.043 "sha512" 00:23:05.043 ], 00:23:05.043 "dhchap_dhgroups": [ 00:23:05.043 "null", 00:23:05.043 "ffdhe2048", 00:23:05.043 "ffdhe3072", 00:23:05.043 "ffdhe4096", 00:23:05.043 "ffdhe6144", 00:23:05.043 "ffdhe8192" 00:23:05.043 ] 00:23:05.043 } 00:23:05.043 }, 00:23:05.043 { 00:23:05.043 "method": "nvmf_set_max_subsystems", 00:23:05.044 "params": { 00:23:05.044 "max_subsystems": 1024 00:23:05.044 } 00:23:05.044 }, 00:23:05.044 { 00:23:05.044 "method": "nvmf_set_crdt", 00:23:05.044 "params": { 00:23:05.044 "crdt1": 0, 00:23:05.044 "crdt2": 0, 00:23:05.044 "crdt3": 0 00:23:05.044 } 00:23:05.044 }, 00:23:05.044 { 00:23:05.044 "method": "nvmf_create_transport", 00:23:05.044 "params": { 00:23:05.044 "trtype": "TCP", 00:23:05.044 "max_queue_depth": 128, 00:23:05.044 "max_io_qpairs_per_ctrlr": 127, 00:23:05.044 "in_capsule_data_size": 4096, 00:23:05.044 "max_io_size": 131072, 00:23:05.044 "io_unit_size": 131072, 00:23:05.044 "max_aq_depth": 128, 00:23:05.044 "num_shared_buffers": 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:05.044 511, 00:23:05.044 "buf_cache_size": 4294967295, 00:23:05.044 "dif_insert_or_strip": false, 00:23:05.044 "zcopy": false, 00:23:05.044 "c2h_success": false, 00:23:05.044 "sock_priority": 0, 00:23:05.044 "abort_timeout_sec": 1, 00:23:05.044 "ack_timeout": 0, 00:23:05.044 "data_wr_pool_size": 0 00:23:05.044 } 00:23:05.044 }, 00:23:05.044 { 00:23:05.044 "method": "nvmf_create_subsystem", 00:23:05.044 "params": { 00:23:05.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.044 "allow_any_host": false, 00:23:05.044 "serial_number": "00000000000000000000", 00:23:05.044 "model_number": "SPDK bdev Controller", 00:23:05.044 "max_namespaces": 32, 00:23:05.044 "min_cntlid": 1, 00:23:05.044 "max_cntlid": 65519, 00:23:05.044 "ana_reporting": false 00:23:05.044 } 00:23:05.044 }, 00:23:05.044 { 00:23:05.044 "method": "nvmf_subsystem_add_host", 00:23:05.044 "params": { 00:23:05.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.044 "host": "nqn.2016-06.io.spdk:host1", 00:23:05.044 "psk": "key0" 00:23:05.044 } 00:23:05.044 }, 00:23:05.044 { 00:23:05.044 "method": "nvmf_subsystem_add_ns", 00:23:05.044 "params": { 00:23:05.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.044 "namespace": { 00:23:05.044 "nsid": 1, 00:23:05.044 "bdev_name": "malloc0", 00:23:05.044 "nguid": "9B95D33FB48B4E16B2312A6C072D44A0", 00:23:05.044 "uuid": "9b95d33f-b48b-4e16-b231-2a6c072d44a0", 00:23:05.044 "no_auto_visible": false 00:23:05.044 } 00:23:05.044 } 00:23:05.044 }, 00:23:05.044 { 00:23:05.044 "method": "nvmf_subsystem_add_listener", 00:23:05.044 "params": { 00:23:05.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.044 "listen_address": { 00:23:05.044 "trtype": "TCP", 00:23:05.044 "adrfam": "IPv4", 00:23:05.044 "traddr": "10.0.0.2", 00:23:05.044 "trsvcid": "4420" 00:23:05.044 }, 00:23:05.044 "secure_channel": false, 00:23:05.044 "sock_impl": "ssl" 00:23:05.044 } 00:23:05.044 } 00:23:05.044 ] 00:23:05.044 } 00:23:05.044 ] 00:23:05.044 }' 00:23:05.044 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.044 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # nvmfpid=2094519 00:23:05.044 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:05.044 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2094519 00:23:05.044 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2094519 ']' 00:23:05.044 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.044 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:05.044 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.044 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:05.044 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.044 [2024-10-06 11:18:02.484954] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:23:05.044 [2024-10-06 11:18:02.485001] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.044 [2024-10-06 11:18:02.544212] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.044 [2024-10-06 11:18:02.583573] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.044 [2024-10-06 11:18:02.583615] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.044 [2024-10-06 11:18:02.583622] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.044 [2024-10-06 11:18:02.583628] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.044 [2024-10-06 11:18:02.583633] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
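For readability, here is a minimal sketch of what the target-side startup traced above amounts to, reconstructed from the nvmfappstart command and the config dump in this log; $tgtcfg is a placeholder for the JSON printed by target/tls.sh@273, and waitforlisten is the autotest helper seen in the trace. The TLS-relevant entries in that JSON are keyring_file_add_key (key0 -> /tmp/tmp.WvEZEFM6mW), nvmf_subsystem_add_host with "psk": "key0", and the 10.0.0.2:4420 listener with "sock_impl": "ssl" and "secure_channel": false.
  tgtcfg='{ "subsystems": [ ... ] }'   # sketch placeholder for the JSON config dumped above
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
  nvmfpid=$!
  waitforlisten "$nvmfpid"             # returns once the target answers on /var/tmp/spdk.sock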
00:23:05.044 [2024-10-06 11:18:02.584201] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.304 [2024-10-06 11:18:02.805323] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.304 [2024-10-06 11:18:02.837346] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:05.304 [2024-10-06 11:18:02.837542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.872 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:05.872 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:05.872 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:05.872 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:05.872 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.872 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.872 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2094761 00:23:05.872 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2094761 /var/tmp/bdevperf.sock 00:23:05.872 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2094761 ']' 00:23:05.872 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:05.872 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:05.872 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:05.872 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
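The initiator half follows the same pattern; this is a sketch assembled from the target/tls.sh@274-@280 traces around this point (the $bperfcfg and bdevperf_pid names are placeholders, every path and flag is taken from the trace). bdevperf starts idle (-z), reads its bdev_nvme_attach_controller call, including "psk": "key0", from the JSON dumped below, and is then driven over its own RPC socket:
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
    -c <(echo "$bperfcfg") &           # $bperfcfg = the bdevperf JSON config echoed below
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expects nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests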
00:23:05.872 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:05.872 "subsystems": [ 00:23:05.872 { 00:23:05.872 "subsystem": "keyring", 00:23:05.872 "config": [ 00:23:05.872 { 00:23:05.872 "method": "keyring_file_add_key", 00:23:05.872 "params": { 00:23:05.872 "name": "key0", 00:23:05.872 "path": "/tmp/tmp.WvEZEFM6mW" 00:23:05.872 } 00:23:05.872 } 00:23:05.872 ] 00:23:05.872 }, 00:23:05.872 { 00:23:05.872 "subsystem": "iobuf", 00:23:05.872 "config": [ 00:23:05.872 { 00:23:05.872 "method": "iobuf_set_options", 00:23:05.872 "params": { 00:23:05.872 "small_pool_count": 8192, 00:23:05.872 "large_pool_count": 1024, 00:23:05.872 "small_bufsize": 8192, 00:23:05.872 "large_bufsize": 135168 00:23:05.872 } 00:23:05.873 } 00:23:05.873 ] 00:23:05.873 }, 00:23:05.873 { 00:23:05.873 "subsystem": "sock", 00:23:05.873 "config": [ 00:23:05.873 { 00:23:05.873 "method": "sock_set_default_impl", 00:23:05.873 "params": { 00:23:05.873 "impl_name": "posix" 00:23:05.873 } 00:23:05.873 }, 00:23:05.873 { 00:23:05.873 "method": "sock_impl_set_options", 00:23:05.873 "params": { 00:23:05.873 "impl_name": "ssl", 00:23:05.873 "recv_buf_size": 4096, 00:23:05.873 "send_buf_size": 4096, 00:23:05.873 "enable_recv_pipe": true, 00:23:05.873 "enable_quickack": false, 00:23:05.873 "enable_placement_id": 0, 00:23:05.873 "enable_zerocopy_send_server": true, 00:23:05.873 "enable_zerocopy_send_client": false, 00:23:05.873 "zerocopy_threshold": 0, 00:23:05.873 "tls_version": 0, 00:23:05.873 "enable_ktls": false 00:23:05.873 } 00:23:05.873 }, 00:23:05.873 { 00:23:05.873 "method": "sock_impl_set_options", 00:23:05.873 "params": { 00:23:05.873 "impl_name": "posix", 00:23:05.873 "recv_buf_size": 2097152, 00:23:05.873 "send_buf_size": 2097152, 00:23:05.873 "enable_recv_pipe": true, 00:23:05.873 "enable_quickack": false, 00:23:05.873 "enable_placement_id": 0, 00:23:05.873 "enable_zerocopy_send_server": true, 00:23:05.873 "enable_zerocopy_send_client": false, 00:23:05.873 "zerocopy_threshold": 0, 00:23:05.873 "tls_version": 0, 00:23:05.873 "enable_ktls": false 00:23:05.873 } 00:23:05.873 } 00:23:05.873 ] 00:23:05.873 }, 00:23:05.873 { 00:23:05.873 "subsystem": "vmd", 00:23:05.873 "config": [] 00:23:05.873 }, 00:23:05.873 { 00:23:05.873 "subsystem": "accel", 00:23:05.873 "config": [ 00:23:05.873 { 00:23:05.873 "method": "accel_set_options", 00:23:05.873 "params": { 00:23:05.873 "small_cache_size": 128, 00:23:05.873 "large_cache_size": 16, 00:23:05.873 "task_count": 2048, 00:23:05.873 "sequence_count": 2048, 00:23:05.873 "buf_count": 2048 00:23:05.873 } 00:23:05.873 } 00:23:05.873 ] 00:23:05.873 }, 00:23:05.873 { 00:23:05.873 "subsystem": "bdev", 00:23:05.873 "config": [ 00:23:05.873 { 00:23:05.873 "method": "bdev_set_options", 00:23:05.873 "params": { 00:23:05.873 "bdev_io_pool_size": 65535, 00:23:05.873 "bdev_io_cache_size": 256, 00:23:05.873 "bdev_auto_examine": true, 00:23:05.873 "iobuf_small_cache_size": 128, 00:23:05.873 "iobuf_large_cache_size": 16 00:23:05.873 } 00:23:05.873 }, 00:23:05.873 { 00:23:05.873 "method": "bdev_raid_set_options", 00:23:05.873 "params": { 00:23:05.873 "process_window_size_kb": 1024, 00:23:05.873 "process_max_bandwidth_mb_sec": 0 00:23:05.873 } 00:23:05.873 }, 00:23:05.873 { 00:23:05.873 "method": "bdev_iscsi_set_options", 00:23:05.873 "params": { 00:23:05.873 "timeout_sec": 30 00:23:05.873 } 00:23:05.873 }, 00:23:05.873 { 00:23:05.873 "method": "bdev_nvme_set_options", 00:23:05.873 "params": { 00:23:05.873 "action_on_timeout": "none", 00:23:05.873 "timeout_us": 0, 
00:23:05.873 "timeout_admin_us": 0, 00:23:05.873 "keep_alive_timeout_ms": 10000, 00:23:05.873 "arbitration_burst": 0, 00:23:05.873 "low_priority_weight": 0, 00:23:05.873 "medium_priority_weight": 0, 00:23:05.873 "high_priority_weight": 0, 00:23:05.873 "nvme_adminq_poll_period_us": 10000, 00:23:05.873 "nvme_ioq_poll_period_us": 0, 00:23:05.873 "io_queue_requests": 512, 00:23:05.873 "delay_cmd_submit": true, 00:23:05.873 "transport_retry_count": 4, 00:23:05.873 "bdev_retry_count": 3, 00:23:05.873 "transport_ack_timeout": 0, 00:23:05.873 "ctrlr_loss_timeout_sec": 0, 00:23:05.873 "reconnect_delay_sec": 0, 00:23:05.873 "fast_io_fail_timeout_sec": 0, 00:23:05.873 "disable_auto_failback": false, 00:23:05.873 "generate_uuids": false, 00:23:05.873 "transport_tos": 0, 00:23:05.873 "nvme_error_stat": false, 00:23:05.873 "rdma_srq_size": 0, 00:23:05.873 "io_path_stat": false, 00:23:05.873 "allow_accel_sequence": false, 00:23:05.873 "rdma_max_cq_size": 0, 00:23:05.873 "rdma_cm_event_timeout_ms": 0, 00:23:05.873 "dhchap_digests": [ 00:23:05.873 "sha256", 00:23:05.873 "sha384", 00:23:05.873 "sha512" 00:23:05.873 ], 00:23:05.873 "dhchap_dhgroups": [ 00:23:05.873 "null", 00:23:05.873 "ffdhe2048", 00:23:05.873 "ffdhe3072", 00:23:05.873 "ffdhe4096", 00:23:05.873 "ffdhe6144", 00:23:05.873 "ffdhe8192" 00:23:05.873 ] 00:23:05.873 } 00:23:05.873 }, 00:23:05.873 { 00:23:05.873 "method": "bdev_nvme_attach_controller", 00:23:05.873 "params": { 00:23:05.873 "name": "nvme0", 00:23:05.873 "trtype": "TCP", 00:23:05.873 "adrfam": "IPv4", 00:23:05.873 "traddr": "10.0.0.2", 00:23:05.873 "trsvcid": "4420", 00:23:05.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.873 "prchk_reftag": false, 00:23:05.873 "prchk_guard": false, 00:23:05.873 "ctrlr_loss_timeout_sec": 0, 00:23:05.873 "reconnect_delay_sec": 0, 00:23:05.873 "fast_io_fail_timeout_sec": 0, 00:23:05.873 "psk": "key0", 00:23:05.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:05.873 "hdgst": false, 00:23:05.873 "ddgst": false 00:23:05.873 } 00:23:05.873 }, 00:23:05.873 { 00:23:05.873 "method": "bdev_nvme_set_hotplug", 00:23:05.873 "params": { 00:23:05.873 "period_us": 100000, 00:23:05.873 "enable": false 00:23:05.873 } 00:23:05.873 }, 00:23:05.873 { 00:23:05.873 "method": "bdev_enable_histogram", 00:23:05.873 "params": { 00:23:05.873 "name": "nvme0n1", 00:23:05.873 "enable": true 00:23:05.873 } 00:23:05.873 }, 00:23:05.873 { 00:23:05.873 "method": "bdev_wait_for_examine" 00:23:05.873 } 00:23:05.873 ] 00:23:05.873 }, 00:23:05.873 { 00:23:05.873 "subsystem": "nbd", 00:23:05.873 "config": [] 00:23:05.873 } 00:23:05.873 ] 00:23:05.873 }' 00:23:05.873 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:05.873 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:05.873 [2024-10-06 11:18:03.399773] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:23:05.873 [2024-10-06 11:18:03.399817] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2094761 ] 00:23:06.132 [2024-10-06 11:18:03.454621] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.132 [2024-10-06 11:18:03.494901] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.132 [2024-10-06 11:18:03.640682] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:06.700 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:06.700 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:06.700 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:06.701 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:06.959 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.959 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:06.959 Running I/O for 1 seconds... 00:23:08.335 2856.00 IOPS, 11.16 MiB/s 00:23:08.335 Latency(us) 00:23:08.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.335 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:08.335 Verification LBA range: start 0x0 length 0x2000 00:23:08.335 nvme0n1 : 1.02 2911.26 11.37 0.00 0.00 43476.23 4712.35 65910.49 00:23:08.335 =================================================================================================================== 00:23:08.335 Total : 2911.26 11.37 0.00 0.00 43476.23 4712.35 65910.49 00:23:08.335 { 00:23:08.335 "results": [ 00:23:08.335 { 00:23:08.335 "job": "nvme0n1", 00:23:08.335 "core_mask": "0x2", 00:23:08.335 "workload": "verify", 00:23:08.335 "status": "finished", 00:23:08.335 "verify_range": { 00:23:08.335 "start": 0, 00:23:08.335 "length": 8192 00:23:08.335 }, 00:23:08.335 "queue_depth": 128, 00:23:08.335 "io_size": 4096, 00:23:08.335 "runtime": 1.024984, 00:23:08.335 "iops": 2911.2649563310256, 00:23:08.335 "mibps": 11.372128735668069, 00:23:08.335 "io_failed": 0, 00:23:08.335 "io_timeout": 0, 00:23:08.335 "avg_latency_us": 43476.22932720541, 00:23:08.335 "min_latency_us": 4712.350476190476, 00:23:08.335 "max_latency_us": 65910.49142857143 00:23:08.335 } 00:23:08.335 ], 00:23:08.335 "core_count": 1 00:23:08.335 } 00:23:08.335 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:08.335 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:08.335 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:08.335 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:23:08.335 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:23:08.335 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:08.335 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:08.335 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:08.335 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:08.335 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:08.335 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:08.335 nvmf_trace.0 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2094761 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2094761 ']' 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2094761 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2094761 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2094761' 00:23:08.336 killing process with pid 2094761 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2094761 00:23:08.336 Received shutdown signal, test time was about 1.000000 seconds 00:23:08.336 00:23:08.336 Latency(us) 00:23:08.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.336 =================================================================================================================== 00:23:08.336 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2094761 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:08.336 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:08.336 rmmod nvme_tcp 00:23:08.336 rmmod nvme_fabrics 00:23:08.336 rmmod nvme_keyring 00:23:08.595 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:08.595 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:08.595 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # 
return 0 00:23:08.595 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 2094519 ']' 00:23:08.595 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 2094519 00:23:08.595 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2094519 ']' 00:23:08.595 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2094519 00:23:08.595 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:08.595 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:08.595 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2094519 00:23:08.595 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:08.595 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:08.595 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2094519' 00:23:08.595 killing process with pid 2094519 00:23:08.595 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2094519 00:23:08.595 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2094519 00:23:08.595 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:08.595 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:08.595 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:08.595 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:08.595 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:23:08.854 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:08.854 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:23:08.854 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:08.854 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:08.854 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.854 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:08.854 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.761 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:10.761 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.cBMCwkuZro /tmp/tmp.EGSEq0Dgik /tmp/tmp.WvEZEFM6mW 00:23:10.761 00:23:10.761 real 1m17.898s 00:23:10.761 user 2m0.385s 00:23:10.761 sys 0m29.036s 00:23:10.761 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:10.761 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.761 ************************************ 00:23:10.761 END TEST nvmf_tls 00:23:10.761 ************************************ 00:23:10.761 11:18:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:10.761 11:18:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:10.761 11:18:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:10.761 11:18:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:10.761 ************************************ 00:23:10.761 START TEST nvmf_fips 00:23:10.761 ************************************ 00:23:10.761 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:11.020 * Looking for test storage... 00:23:11.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:11.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.020 --rc genhtml_branch_coverage=1 00:23:11.020 --rc genhtml_function_coverage=1 00:23:11.020 --rc genhtml_legend=1 00:23:11.020 --rc geninfo_all_blocks=1 00:23:11.020 --rc geninfo_unexecuted_blocks=1 00:23:11.020 00:23:11.020 ' 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:11.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.020 --rc genhtml_branch_coverage=1 00:23:11.020 --rc genhtml_function_coverage=1 00:23:11.020 --rc genhtml_legend=1 00:23:11.020 --rc geninfo_all_blocks=1 00:23:11.020 --rc geninfo_unexecuted_blocks=1 00:23:11.020 00:23:11.020 ' 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:11.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.020 --rc genhtml_branch_coverage=1 00:23:11.020 --rc genhtml_function_coverage=1 00:23:11.020 --rc genhtml_legend=1 00:23:11.020 --rc geninfo_all_blocks=1 00:23:11.020 --rc geninfo_unexecuted_blocks=1 00:23:11.020 00:23:11.020 ' 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:11.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.020 --rc genhtml_branch_coverage=1 00:23:11.020 --rc genhtml_function_coverage=1 00:23:11.020 --rc genhtml_legend=1 00:23:11.020 --rc geninfo_all_blocks=1 00:23:11.020 --rc geninfo_unexecuted_blocks=1 00:23:11.020 00:23:11.020 ' 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:11.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:11.020 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:11.021 11:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:11.021 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:23:11.280 Error setting digest 00:23:11.280 400295EB457F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:11.280 400295EB457F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:11.280 
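In short, the FIPS sanity check that fips.sh performed above reduces to the following sketch (spdk_fips.conf is the config that build_openssl_config wrote; the error text is whatever the local OpenSSL prints): with the generated config forced, both the base and fips providers must be listed, and a non-approved digest such as MD5 must fail, which is exactly the "Error setting digest" output above.
  export OPENSSL_CONF=spdk_fips.conf
  openssl list -providers | grep name      # expect the base provider and the FIPS provider
  if echo test | openssl md5; then         # must fail while only FIPS-approved algorithms are available
    echo "FIPS mode is not enforced" >&2
  fi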
11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:11.280 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.554 11:18:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.554 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:16.555 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:16.555 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:16.555 11:18:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:16.555 Found net devices under 0000:af:00.0: cvl_0_0 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:16.555 Found net devices under 0000:af:00.1: cvl_0_1 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:16.555 11:18:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.555 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.555 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:16.555 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.555 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.555 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.555 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:16.555 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:16.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:16.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:23:16.555 00:23:16.555 --- 10.0.0.2 ping statistics --- 00:23:16.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.555 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:23:16.555 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:16.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:23:16.555 00:23:16.555 --- 10.0.0.1 ping statistics --- 00:23:16.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.555 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:23:16.555 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.555 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:23:16.555 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:16.555 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.555 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:16.555 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:16.555 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.555 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:16.555 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:16.815 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:16.815 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:16.815 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:16.815 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:16.815 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:16.815 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=2099089 00:23:16.815 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 2099089 00:23:16.815 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2099089 ']' 00:23:16.815 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.815 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:16.815 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.815 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:16.815 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:16.815 [2024-10-06 11:18:14.212339] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
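nvmftestinit, traced above, splits one physical host into a target side and an initiator side by moving one NIC port into a network namespace, and nvmfappstart then launches nvmf_tgt inside that namespace. A condensed sketch using the names from this run (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2 and the core mask are specific to this host; repository paths shortened):

  # target port goes into its own namespace; initiator port stays in the default one
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # let NVMe/TCP traffic reach port 4420 and verify the path in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # the target application runs inside the namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2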
00:23:16.815 [2024-10-06 11:18:14.212389] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.815 [2024-10-06 11:18:14.269260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.815 [2024-10-06 11:18:14.307148] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.815 [2024-10-06 11:18:14.307188] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.815 [2024-10-06 11:18:14.307196] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.815 [2024-10-06 11:18:14.307202] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.815 [2024-10-06 11:18:14.307206] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.815 [2024-10-06 11:18:14.307759] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.074 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:17.074 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:17.074 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:17.074 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:17.074 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:17.074 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.074 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:17.074 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:17.074 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:17.074 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.kFY 00:23:17.074 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:17.074 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.kFY 00:23:17.074 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.kFY 00:23:17.074 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.kFY 00:23:17.074 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:17.074 [2024-10-06 11:18:14.609920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.074 [2024-10-06 11:18:14.625929] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:17.074 [2024-10-06 11:18:14.626127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.334 malloc0 00:23:17.334 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:17.334 11:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2099121 00:23:17.334 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:17.334 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2099121 /var/tmp/bdevperf.sock 00:23:17.334 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2099121 ']' 00:23:17.334 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.334 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:17.334 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:17.334 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:17.334 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:17.334 [2024-10-06 11:18:14.759851] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:23:17.334 [2024-10-06 11:18:14.759902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2099121 ] 00:23:17.334 [2024-10-06 11:18:14.809280] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.334 [2024-10-06 11:18:14.847936] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.593 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:17.593 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:17.593 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.kFY 00:23:17.593 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:17.851 [2024-10-06 11:18:15.305043] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:17.851 TLSTESTn1 00:23:17.851 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:18.109 Running I/O for 10 seconds... 
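Before the 10-second run above starts, fips.sh wires up the TLS path: it writes the interchange-format PSK to a 0600 file, registers it as a key on the bdevperf RPC socket, and attaches the controller with that key (the target side had already been given a TCP transport and a TLS listener on 10.0.0.2:4420). A condensed sketch of those steps, with the key value, temp path and NQNs taken from this run and repository paths shortened:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"

  # register the PSK file with the bdevperf instance and attach over TLS
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # kick off the verify workload against the attached namespace
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests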
00:23:28.359 5254.00 IOPS, 20.52 MiB/s 5497.00 IOPS, 21.47 MiB/s 5486.67 IOPS, 21.43 MiB/s 5575.50 IOPS, 21.78 MiB/s 5531.60 IOPS, 21.61 MiB/s 5579.50 IOPS, 21.79 MiB/s 5599.14 IOPS, 21.87 MiB/s 5633.12 IOPS, 22.00 MiB/s 5599.22 IOPS, 21.87 MiB/s 5313.40 IOPS, 20.76 MiB/s 00:23:28.359 Latency(us) 00:23:28.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.359 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:28.359 Verification LBA range: start 0x0 length 0x2000 00:23:28.359 TLSTESTn1 : 10.03 5311.54 20.75 0.00 0.00 24057.00 4837.18 55424.73 00:23:28.359 =================================================================================================================== 00:23:28.359 Total : 5311.54 20.75 0.00 0.00 24057.00 4837.18 55424.73 00:23:28.359 { 00:23:28.359 "results": [ 00:23:28.359 { 00:23:28.359 "job": "TLSTESTn1", 00:23:28.359 "core_mask": "0x4", 00:23:28.359 "workload": "verify", 00:23:28.359 "status": "finished", 00:23:28.359 "verify_range": { 00:23:28.359 "start": 0, 00:23:28.359 "length": 8192 00:23:28.359 }, 00:23:28.359 "queue_depth": 128, 00:23:28.359 "io_size": 4096, 00:23:28.359 "runtime": 10.027606, 00:23:28.360 "iops": 5311.536971037754, 00:23:28.360 "mibps": 20.748191293116225, 00:23:28.360 "io_failed": 0, 00:23:28.360 "io_timeout": 0, 00:23:28.360 "avg_latency_us": 24057.003854083407, 00:23:28.360 "min_latency_us": 4837.1809523809525, 00:23:28.360 "max_latency_us": 55424.73142857143 00:23:28.360 } 00:23:28.360 ], 00:23:28.360 "core_count": 1 00:23:28.360 } 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:28.360 nvmf_trace.0 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2099121 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2099121 ']' 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2099121 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2099121 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2099121' 00:23:28.360 killing process with pid 2099121 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2099121 00:23:28.360 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.360 00:23:28.360 Latency(us) 00:23:28.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.360 =================================================================================================================== 00:23:28.360 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2099121 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:28.360 rmmod nvme_tcp 00:23:28.360 rmmod nvme_fabrics 00:23:28.360 rmmod nvme_keyring 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 2099089 ']' 00:23:28.360 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 2099089 00:23:28.618 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2099089 ']' 00:23:28.618 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2099089 00:23:28.618 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:28.618 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:28.618 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2099089 00:23:28.618 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:28.618 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:28.619 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2099089' 00:23:28.619 killing process with pid 2099089 00:23:28.619 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2099089 00:23:28.619 11:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2099089 00:23:28.619 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:28.619 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:28.619 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:28.619 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:23:28.619 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:23:28.619 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:23:28.619 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:28.878 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:28.878 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:28.878 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.878 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.878 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.783 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:30.783 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.kFY 00:23:30.783 00:23:30.783 real 0m19.932s 00:23:30.783 user 0m21.408s 00:23:30.783 sys 0m8.916s 00:23:30.783 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:30.783 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:30.783 ************************************ 00:23:30.783 END TEST nvmf_fips 00:23:30.783 ************************************ 00:23:30.783 11:18:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:30.783 11:18:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:30.783 11:18:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:30.783 11:18:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:30.783 ************************************ 00:23:30.783 START TEST nvmf_control_msg_list 00:23:30.783 ************************************ 00:23:30.783 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:31.042 * Looking for test storage... 
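The cleanup traced above unwinds the whole stack before the next test starts: stop bdevperf and the target, unload the NVMe/TCP modules, strip the SPDK-tagged iptables rule, flush the initiator address, remove the namespace, and delete the PSK file. A condensed sketch (pids and the temp key path are from this run; the namespace removal line is what _remove_spdk_ns amounts to here, not its literal code):

  kill 2099121 && wait 2099121        # bdevperf, as killprocess does above
  kill 2099089 && wait 2099089        # nvmf_tgt

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # drop only the SPDK-tagged iptables rules, keep everything else
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  ip -4 addr flush cvl_0_1
  ip netns del cvl_0_0_ns_spdk        # assumption: equivalent of _remove_spdk_ns
  rm -f /tmp/spdk-psk.kFY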
00:23:31.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:31.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.042 --rc genhtml_branch_coverage=1 00:23:31.042 --rc genhtml_function_coverage=1 00:23:31.042 --rc genhtml_legend=1 00:23:31.042 --rc geninfo_all_blocks=1 00:23:31.042 --rc geninfo_unexecuted_blocks=1 00:23:31.042 00:23:31.042 ' 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:31.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.042 --rc genhtml_branch_coverage=1 00:23:31.042 --rc genhtml_function_coverage=1 00:23:31.042 --rc genhtml_legend=1 00:23:31.042 --rc geninfo_all_blocks=1 00:23:31.042 --rc geninfo_unexecuted_blocks=1 00:23:31.042 00:23:31.042 ' 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:31.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.042 --rc genhtml_branch_coverage=1 00:23:31.042 --rc genhtml_function_coverage=1 00:23:31.042 --rc genhtml_legend=1 00:23:31.042 --rc geninfo_all_blocks=1 00:23:31.042 --rc geninfo_unexecuted_blocks=1 00:23:31.042 00:23:31.042 ' 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:31.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.042 --rc genhtml_branch_coverage=1 00:23:31.042 --rc genhtml_function_coverage=1 00:23:31.042 --rc genhtml_legend=1 00:23:31.042 --rc geninfo_all_blocks=1 00:23:31.042 --rc geninfo_unexecuted_blocks=1 00:23:31.042 00:23:31.042 ' 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
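The long trace above is the lcov version gate from scripts/common.sh: `lt 1.15 2` splits both version strings on `.`, `-` and `:` and compares them field by field. A condensed sketch of that comparison, assuming purely numeric fields and folding cmp_versions into a single helper:

  lt() {    # returns 0 if $1 < $2, comparing numeric fields left to right
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1    # equal versions are not "less than"
  }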
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:31.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:23:31.042 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:36.319 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:36.319 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:23:36.319 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:36.319 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:36.319 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:36.319 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:36.319 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:23:36.320 11:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:36.320 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.320 11:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:36.320 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:36.320 Found net devices under 0000:af:00.0: cvl_0_0 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:36.320 Found net devices under 0000:af:00.1: cvl_0_1 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:36.320 11:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:36.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:36.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:23:36.320 00:23:36.320 --- 10.0.0.2 ping statistics --- 00:23:36.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.320 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:23:36.320 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:36.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:36.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:23:36.320 00:23:36.320 --- 10.0.0.1 ping statistics --- 00:23:36.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.320 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=2104316 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 2104316 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 2104316 ']' 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:36.321 11:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:36.321 [2024-10-06 11:18:33.642541] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:23:36.321 [2024-10-06 11:18:33.642586] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.321 [2024-10-06 11:18:33.701201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.321 [2024-10-06 11:18:33.739902] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.321 [2024-10-06 11:18:33.739942] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.321 [2024-10-06 11:18:33.739949] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.321 [2024-10-06 11:18:33.739954] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.321 [2024-10-06 11:18:33.739959] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
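The nvmf_tcp_init and nvmfappstart steps traced above boil down to a small piece of namespace plumbing: the target-side E810 port is moved into its own network namespace, both ports get a 10.0.0.0/24 address, an iptables rule opens TCP port 4420, connectivity is verified with ping in both directions, and nvmf_tgt is then started inside the namespace. A condensed sketch of that sequence, reusing the cvl_0_0/cvl_0_1 interface names and addresses captured in this run (paths shortened, error handling and the iptables comment omitted; an illustration, not part of the recorded output):

# target port lives in its own namespace; the initiator port stays on the host
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port and sanity-check the path in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# start the SPDK NVMe-oF target inside the namespace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &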
00:23:36.321 [2024-10-06 11:18:33.740490] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:36.321 [2024-10-06 11:18:33.865749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.321 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:36.581 Malloc0 00:23:36.581 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.581 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:36.581 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.581 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:36.581 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.581 11:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:36.581 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.581 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:36.581 [2024-10-06 11:18:33.918937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.581 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.581 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2104385 00:23:36.581 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:36.581 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2104386 00:23:36.581 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2104387 00:23:36.581 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:36.581 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2104385 00:23:36.581 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:36.581 [2024-10-06 11:18:33.987499] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:36.581 [2024-10-06 11:18:33.987697] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:36.581 [2024-10-06 11:18:33.987873] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:37.961 Initializing NVMe Controllers 00:23:37.961 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:37.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:37.961 Initialization complete. Launching workers. 
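Before the per-initiator latency summaries that follow, the control_msg_list.sh steps traced above set the target up with a deliberately tiny control-message pool and then point three single-queue spdk_nvme_perf initiators at the same listener so they must contend for it. Restated as a plain RPC/perf sequence, with scripts/rpc.py standing in for the test's rpc_cmd wrapper (an illustrative equivalent; the parameter values are the ones captured above):

# transport with 768-byte in-capsule data and a single control message buffer
scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# three 4 KiB randread initiators on separate cores, all run concurrently
for core in 0x2 0x4 0x8; do
  ./build/bin/spdk_nvme_perf -c $core -q 1 -o 4096 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
done
wait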
00:23:37.961 ======================================================== 00:23:37.961 Latency(us) 00:23:37.961 Device Information : IOPS MiB/s Average min max 00:23:37.961 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40971.36 40758.64 41974.10 00:23:37.961 ======================================================== 00:23:37.961 Total : 25.00 0.10 40971.36 40758.64 41974.10 00:23:37.961 00:23:37.961 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2104386 00:23:37.961 Initializing NVMe Controllers 00:23:37.961 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:37.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:37.961 Initialization complete. Launching workers. 00:23:37.961 ======================================================== 00:23:37.961 Latency(us) 00:23:37.962 Device Information : IOPS MiB/s Average min max 00:23:37.962 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40932.56 40719.80 41870.87 00:23:37.962 ======================================================== 00:23:37.962 Total : 25.00 0.10 40932.56 40719.80 41870.87 00:23:37.962 00:23:37.962 Initializing NVMe Controllers 00:23:37.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:37.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:37.962 Initialization complete. Launching workers. 00:23:37.962 ======================================================== 00:23:37.962 Latency(us) 00:23:37.962 Device Information : IOPS MiB/s Average min max 00:23:37.962 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40926.18 40811.32 41501.54 00:23:37.962 ======================================================== 00:23:37.962 Total : 25.00 0.10 40926.18 40811.32 41501.54 00:23:37.962 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2104387 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:37.962 rmmod nvme_tcp 00:23:37.962 rmmod nvme_fabrics 00:23:37.962 rmmod nvme_keyring 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@515 -- # '[' -n 2104316 ']' 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 2104316 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 2104316 ']' 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 2104316 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2104316 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2104316' 00:23:37.962 killing process with pid 2104316 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 2104316 00:23:37.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 2104316 00:23:38.221 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:38.221 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:38.221 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:38.221 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:38.221 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:23:38.221 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:38.221 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:23:38.221 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:38.221 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:38.221 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.222 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.222 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.128 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:40.128 00:23:40.128 real 0m9.302s 00:23:40.128 user 0m6.630s 00:23:40.128 sys 0m4.696s 00:23:40.128 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:40.128 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:40.128 ************************************ 00:23:40.128 END TEST nvmf_control_msg_list 00:23:40.128 
************************************ 00:23:40.128 11:18:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:40.128 11:18:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:40.128 11:18:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:40.128 11:18:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:40.388 ************************************ 00:23:40.388 START TEST nvmf_wait_for_buf 00:23:40.388 ************************************ 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:40.388 * Looking for test storage... 00:23:40.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:40.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.388 --rc genhtml_branch_coverage=1 00:23:40.388 --rc genhtml_function_coverage=1 00:23:40.388 --rc genhtml_legend=1 00:23:40.388 --rc geninfo_all_blocks=1 00:23:40.388 --rc geninfo_unexecuted_blocks=1 00:23:40.388 00:23:40.388 ' 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:40.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.388 --rc genhtml_branch_coverage=1 00:23:40.388 --rc genhtml_function_coverage=1 00:23:40.388 --rc genhtml_legend=1 00:23:40.388 --rc geninfo_all_blocks=1 00:23:40.388 --rc geninfo_unexecuted_blocks=1 00:23:40.388 00:23:40.388 ' 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:40.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.388 --rc genhtml_branch_coverage=1 00:23:40.388 --rc genhtml_function_coverage=1 00:23:40.388 --rc genhtml_legend=1 00:23:40.388 --rc geninfo_all_blocks=1 00:23:40.388 --rc geninfo_unexecuted_blocks=1 00:23:40.388 00:23:40.388 ' 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:40.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.388 --rc genhtml_branch_coverage=1 00:23:40.388 --rc genhtml_function_coverage=1 00:23:40.388 --rc genhtml_legend=1 00:23:40.388 --rc geninfo_all_blocks=1 00:23:40.388 --rc geninfo_unexecuted_blocks=1 00:23:40.388 00:23:40.388 ' 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:40.388 11:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.388 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:40.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:40.389 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.667 
11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:45.667 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:45.667 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:45.667 Found net devices under 0000:af:00.0: cvl_0_0 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:45.667 Found net devices under 0000:af:00.1: cvl_0_1 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.667 11:18:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.667 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:45.668 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.668 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.668 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:45.668 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:45.668 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.668 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.668 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:45.668 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:45.668 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.668 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.668 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.668 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.668 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:45.668 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:45.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:23:45.668 00:23:45.668 --- 10.0.0.2 ping statistics --- 00:23:45.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.668 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:45.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:23:45.668 00:23:45.668 --- 10.0.0.1 ping statistics --- 00:23:45.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.668 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=2107884 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 2107884 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 2107884 ']' 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:45.668 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.668 [2024-10-06 11:18:43.174722] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
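The --wait-for-rpc start above is what lets wait_for_buf.sh shrink the iobuf pools before the framework initializes; the RPC calls that follow in the trace amount to roughly the sequence below, after which a 128 KiB randread perf run is expected to push the small buffer pool into its retry path. scripts/rpc.py is shown as an illustrative stand-in for the test's rpc_cmd wrapper; the option values are the ones captured in this run:

# disable accel buffer caching and shrink the small iobuf pool before init
scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
scripts/rpc.py framework_start_init
# transport sized so large reads must wait for shared buffers (-n 24 -b 24)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
# ... create the subsystem, namespace and listener, then run the 128 KiB workload ...
# afterwards the nvmf_TCP small-pool retry counter should be non-zero
scripts/rpc.py iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'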
00:23:45.668 [2024-10-06 11:18:43.174763] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.668 [2024-10-06 11:18:43.233027] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.928 [2024-10-06 11:18:43.271801] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.928 [2024-10-06 11:18:43.271840] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:45.928 [2024-10-06 11:18:43.271847] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.928 [2024-10-06 11:18:43.271853] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.928 [2024-10-06 11:18:43.271858] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:45.928 [2024-10-06 11:18:43.272377] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.928 11:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.928 Malloc0 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.928 [2024-10-06 11:18:43.443741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.928 [2024-10-06 11:18:43.467935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.928 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:46.187 [2024-10-06 11:18:43.536705] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:47.566 Initializing NVMe Controllers 00:23:47.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:47.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:23:47.566 Initialization complete. Launching workers. 00:23:47.566 ======================================================== 00:23:47.566 Latency(us) 00:23:47.566 Device Information : IOPS MiB/s Average min max 00:23:47.566 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32271.27 7305.63 63856.38 00:23:47.566 ======================================================== 00:23:47.566 Total : 129.00 16.12 32271.27 7305.63 63856.38 00:23:47.566 00:23:47.567 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:23:47.567 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:23:47.567 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.567 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:47.567 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.567 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:23:47.567 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:23:47.567 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:47.567 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:23:47.567 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:47.567 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:23:47.567 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:47.567 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:23:47.567 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:47.567 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:47.567 rmmod nvme_tcp 00:23:47.567 rmmod nvme_fabrics 00:23:47.567 rmmod nvme_keyring 00:23:47.567 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:47.567 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:23:47.567 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:23:47.567 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 2107884 ']' 00:23:47.567 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 2107884 00:23:47.567 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 2107884 ']' 00:23:47.567 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 2107884 00:23:47.567 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:23:47.567 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:47.567 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2107884 00:23:47.567 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:47.567 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:47.567 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2107884' 00:23:47.567 killing process with pid 2107884 00:23:47.567 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 2107884 00:23:47.567 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 2107884 00:23:47.827 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:47.827 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:47.827 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:47.827 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:23:47.827 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:23:47.827 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:47.827 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:23:47.827 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:47.827 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:47.827 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.827 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.827 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.731 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:49.991 00:23:49.991 real 0m9.602s 00:23:49.991 user 0m3.617s 00:23:49.991 sys 0m4.389s 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:49.991 ************************************ 00:23:49.991 END TEST nvmf_wait_for_buf 00:23:49.991 ************************************ 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:49.991 11:18:47 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:49.991 ************************************ 00:23:49.991 START TEST nvmf_fuzz 00:23:49.991 ************************************ 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:49.991 * Looking for test storage... 00:23:49.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:49.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.991 --rc genhtml_branch_coverage=1 00:23:49.991 --rc genhtml_function_coverage=1 00:23:49.991 --rc genhtml_legend=1 00:23:49.991 --rc geninfo_all_blocks=1 00:23:49.991 --rc geninfo_unexecuted_blocks=1 00:23:49.991 00:23:49.991 ' 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:49.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.991 --rc genhtml_branch_coverage=1 00:23:49.991 --rc genhtml_function_coverage=1 00:23:49.991 --rc genhtml_legend=1 00:23:49.991 --rc geninfo_all_blocks=1 00:23:49.991 --rc geninfo_unexecuted_blocks=1 00:23:49.991 00:23:49.991 ' 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:49.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.991 --rc genhtml_branch_coverage=1 00:23:49.991 --rc genhtml_function_coverage=1 00:23:49.991 --rc genhtml_legend=1 00:23:49.991 --rc geninfo_all_blocks=1 00:23:49.991 --rc geninfo_unexecuted_blocks=1 00:23:49.991 00:23:49.991 ' 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:49.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.991 --rc genhtml_branch_coverage=1 00:23:49.991 --rc genhtml_function_coverage=1 00:23:49.991 --rc genhtml_legend=1 00:23:49.991 --rc geninfo_all_blocks=1 00:23:49.991 --rc geninfo_unexecuted_blocks=1 00:23:49.991 00:23:49.991 ' 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:49.991 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:49.992 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:23:49.992 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.992 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.992 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.992 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.992 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.992 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.992 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.992 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.992 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.992 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:49.992 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:49.992 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.992 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.992 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:49.992 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.992 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:49.992 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:23:50.251 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.251 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.251 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.251 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.251 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.251 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.251 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:50.251 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.251 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:23:50.251 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:50.251 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:50.251 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.251 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.251 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.251 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:50.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:50.252 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:50.252 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:50.252 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:50.252 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:50.252 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:50.252 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:23:50.252 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:50.252 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:50.252 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:50.252 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.252 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.252 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.252 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:50.252 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:50.252 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:23:50.252 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:55.523 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:55.523 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:55.523 Found net devices under 0000:af:00.0: cvl_0_0 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.523 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:55.524 Found net devices under 0000:af:00.1: cvl_0_1 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # is_hw=yes 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:55.524 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.524 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.524 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.524 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:55.524 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:55.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:23:55.524 00:23:55.524 --- 10.0.0.2 ping statistics --- 00:23:55.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.524 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:23:55.524 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:55.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:23:55.524 00:23:55.524 --- 10.0.0.1 ping statistics --- 00:23:55.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.524 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:23:55.524 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.524 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # return 0 00:23:55.524 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:55.524 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.524 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:55.524 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:55.524 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.524 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:55.524 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2111682 00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2111682 00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2111682 ']' 00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
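[editor's note] For orientation, the fabrics fuzz run traced below reduces to the RPC sequence sketched here. This is a minimal, hedged sketch assembled only from commands that appear verbatim in this log (rpc_cmd in the harness wraps scripts/rpc.py; paths are shown relative to the spdk checkout, and no flags beyond those traced are assumed):

    # create the TCP transport and a 64 MB malloc bdev with 512-byte blocks
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    # expose the bdev through subsystem nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # point the fuzzer at that listener: 30 s run, fixed seed, admin and I/O commands
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a

The second fuzz pass in the trace reuses the same listener but replays the JSON command corpus (example.json via -j) instead of running the timed random generator.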
00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.782 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:56.041 Malloc0 00:23:56.041 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.041 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:56.041 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.041 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:56.041 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.041 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:56.041 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.041 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:56.041 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.041 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:56.041 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.041 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:56.041 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.041 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:56.041 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:28.305 Fuzzing completed. 
Shutting down the fuzz application 00:24:28.305 00:24:28.305 Dumping successful admin opcodes: 00:24:28.305 8, 9, 10, 24, 00:24:28.305 Dumping successful io opcodes: 00:24:28.305 0, 9, 00:24:28.305 NS: 0x200003aeff00 I/O qp, Total commands completed: 883314, total successful commands: 5137, random_seed: 2216291008 00:24:28.305 NS: 0x200003aeff00 admin qp, Total commands completed: 83789, total successful commands: 667, random_seed: 3079025344 00:24:28.305 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:28.305 Fuzzing completed. Shutting down the fuzz application 00:24:28.305 00:24:28.305 Dumping successful admin opcodes: 00:24:28.305 24, 00:24:28.305 Dumping successful io opcodes: 00:24:28.305 00:24:28.305 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1092902844 00:24:28.305 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1092967314 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.305 rmmod nvme_tcp 00:24:28.305 rmmod nvme_fabrics 00:24:28.305 rmmod nvme_keyring 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@515 -- # '[' -n 2111682 ']' 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # killprocess 2111682 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2111682 ']' 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 2111682 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:24:28.305 11:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2111682 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2111682' 00:24:28.305 killing process with pid 2111682 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 2111682 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 2111682 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-save 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-restore 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.305 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.211 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:30.211 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:30.211 00:24:30.211 real 0m40.204s 00:24:30.211 user 0m52.065s 00:24:30.211 sys 0m17.435s 00:24:30.211 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:30.211 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:30.211 ************************************ 00:24:30.211 END TEST nvmf_fuzz 00:24:30.211 ************************************ 00:24:30.211 11:19:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:30.211 11:19:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:30.211 11:19:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:30.211 11:19:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:30.211 
************************************ 00:24:30.211 START TEST nvmf_multiconnection 00:24:30.211 ************************************ 00:24:30.211 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:30.211 * Looking for test storage... 00:24:30.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:30.211 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:30.211 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:24:30.211 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:24:30.471 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:30.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.472 --rc genhtml_branch_coverage=1 00:24:30.472 --rc genhtml_function_coverage=1 00:24:30.472 --rc genhtml_legend=1 00:24:30.472 --rc geninfo_all_blocks=1 00:24:30.472 --rc geninfo_unexecuted_blocks=1 00:24:30.472 00:24:30.472 ' 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:30.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.472 --rc genhtml_branch_coverage=1 00:24:30.472 --rc genhtml_function_coverage=1 00:24:30.472 --rc genhtml_legend=1 00:24:30.472 --rc geninfo_all_blocks=1 00:24:30.472 --rc geninfo_unexecuted_blocks=1 00:24:30.472 00:24:30.472 ' 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:30.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.472 --rc genhtml_branch_coverage=1 00:24:30.472 --rc genhtml_function_coverage=1 00:24:30.472 --rc genhtml_legend=1 00:24:30.472 --rc geninfo_all_blocks=1 00:24:30.472 --rc geninfo_unexecuted_blocks=1 00:24:30.472 00:24:30.472 ' 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:30.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.472 --rc genhtml_branch_coverage=1 00:24:30.472 --rc genhtml_function_coverage=1 00:24:30.472 --rc genhtml_legend=1 00:24:30.472 --rc geninfo_all_blocks=1 00:24:30.472 --rc geninfo_unexecuted_blocks=1 00:24:30.472 00:24:30.472 ' 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:30.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:24:30.472 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:35.748 11:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:35.748 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:35.748 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:35.749 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:35.749 Found net devices under 0000:af:00.0: cvl_0_0 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:35.749 Found net devices under 0000:af:00.1: cvl_0_1 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # is_hw=yes 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:35.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:35.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:24:35.749 00:24:35.749 --- 10.0.0.2 ping statistics --- 00:24:35.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.749 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:35.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:35.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:24:35.749 00:24:35.749 --- 10.0.0.1 ping statistics --- 00:24:35.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.749 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # return 0 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:35.749 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:36.008 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.008 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # nvmfpid=2120209 00:24:36.009 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:36.009 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # waitforlisten 2120209 00:24:36.009 11:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 2120209 ']' 00:24:36.009 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.009 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:36.009 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.009 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:36.009 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.009 [2024-10-06 11:19:33.376077] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:24:36.009 [2024-10-06 11:19:33.376124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.009 [2024-10-06 11:19:33.436322] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:36.009 [2024-10-06 11:19:33.477690] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.009 [2024-10-06 11:19:33.477731] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.009 [2024-10-06 11:19:33.477739] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.009 [2024-10-06 11:19:33.477745] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.009 [2024-10-06 11:19:33.477750] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
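The trace above builds the TCP test topology and starts the target: one port of the E810 NIC (cvl_0_0) is moved into a dedicated network namespace and addressed as 10.0.0.2 to serve as the target interface, the peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits TCP traffic to port 4420, connectivity is verified with a ping in each direction, and nvmf_tgt is then launched inside the namespace (the EAL initialization notices above). A minimal sketch of the equivalent manual steps, using only commands that appear in the trace (interface names, addresses, core mask, and binary path are specific to this run), is:

  # move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the NVMe/TCP port and confirm the two ends can reach each other
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # start the SPDK NVMe-oF target inside the namespace, on a 4-core mask
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &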
00:24:36.009 [2024-10-06 11:19:33.479276] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.009 [2024-10-06 11:19:33.479378] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.009 [2024-10-06 11:19:33.479464] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:24:36.009 [2024-10-06 11:19:33.479465] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.009 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:36.009 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:24:36.009 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:36.009 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:36.009 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.269 [2024-10-06 11:19:33.622798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.269 Malloc1 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
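With the tcp transport created (nvmf_create_transport -t tcp -o -u 8192), the test now loops over NVMF_SUBSYS=11 subsystems: each iteration creates a 64 MiB malloc bdev with 512-byte blocks, a subsystem nqn.2016-06.io.spdk:cnode<i> with serial SPDK<i>, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420. One iteration, written here against SPDK's scripts/rpc.py (an assumption: the rpc_cmd helper seen in the trace is a wrapper around that script and the default /var/tmp/spdk.sock socket), looks roughly like:

  # repeated for i = 1..11 (cnode1 / SPDK1 / Malloc1 shown)
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1     # 64 MiB RAM-backed bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420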
00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.269 [2024-10-06 11:19:33.678257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.269 Malloc2 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.269 11:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.269 Malloc3 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.269 Malloc4 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.269 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.270 Malloc5 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.270 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.530 Malloc6 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.530 Malloc7 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
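Once all eleven subsystems have listeners (the remainder of the loop follows below), the initiator side connects to each one and polls lsblk until a block device carrying the expected serial appears. A simplified sketch of that connect-and-wait pattern for the first subsystem, using the same nvme connect arguments that appear later in this trace (the test's waitforserial helper additionally bounds the number of retries), is:

  # host side: attach cnode1 over NVMe/TCP and wait for its namespace to appear
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      --hostid=80b56b8f-cbc7-e911-906e-0017a4403562
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK1)" -ge 1 ]; do
      sleep 2
  done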
00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.530 Malloc8 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.530 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.530 Malloc9 00:24:36.530 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.530 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:36.530 11:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.530 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.530 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.531 Malloc10 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.531 Malloc11 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.531 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.791 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.791 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:36.791 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.791 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.791 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.791 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:36.791 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.791 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.791 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.791 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:36.791 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.791 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:37.729 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:37.729 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:37.729 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:37.729 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:37.729 11:19:35 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:40.265 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:40.265 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:40.265 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:24:40.265 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:40.265 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:40.265 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:40.265 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.265 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:41.203 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:41.203 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:41.203 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:41.203 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:41.203 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:43.109 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:43.109 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:43.109 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:24:43.109 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:43.109 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:43.109 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:43.109 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.109 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:44.489 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:44.489 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:44.489 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:24:44.489 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:44.489 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:46.391 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:46.391 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:46.391 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:24:46.391 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:46.391 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:46.391 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:46.391 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:46.391 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:47.769 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:47.769 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:47.769 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:47.769 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:47.769 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:49.675 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:49.675 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:49.675 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:24:49.675 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:49.675 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:49.675 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:49.675 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:49.675 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:51.053 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:51.053 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # 
local i=0 00:24:51.053 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:51.053 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:51.053 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:52.960 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:52.961 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:24:52.961 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:52.961 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:52.961 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:52.961 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:52.961 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.961 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:54.342 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:54.342 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:54.342 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:54.342 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:54.342 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:56.250 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:56.250 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:56.250 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:24:56.250 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:56.250 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:56.250 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:56.250 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:56.250 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:57.630 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:57.630 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:57.630 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:57.630 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:57.630 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:59.539 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:59.539 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:59.539 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:24:59.539 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:59.539 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:59.539 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:59.539 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.539 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:01.441 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:01.441 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:01.441 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:01.441 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:01.441 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:03.345 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:03.346 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:03.346 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:03.346 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:03.346 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:03.346 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:03.346 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.346 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:04.722 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:04.722 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:04.722 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:04.722 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:04.722 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:06.629 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:06.629 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:06.629 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:06.629 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:06.629 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:06.629 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:06.629 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.629 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:08.008 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:08.008 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:08.008 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:08.008 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:08.008 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:09.917 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:09.917 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:09.917 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:09.917 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:09.918 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:09.918 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:09.918 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.918 11:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:11.307 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:11.307 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:11.307 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:11.307 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:11.307 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:13.215 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:13.215 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:13.215 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:13.215 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:13.215 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:13.215 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:13.215 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:13.215 [global] 00:25:13.215 thread=1 00:25:13.215 invalidate=1 00:25:13.215 rw=read 00:25:13.215 time_based=1 00:25:13.215 runtime=10 00:25:13.215 ioengine=libaio 00:25:13.215 direct=1 00:25:13.215 bs=262144 00:25:13.215 iodepth=64 00:25:13.215 norandommap=1 00:25:13.215 numjobs=1 00:25:13.215 00:25:13.215 [job0] 00:25:13.215 filename=/dev/nvme0n1 00:25:13.215 [job1] 00:25:13.215 filename=/dev/nvme10n1 00:25:13.215 [job2] 00:25:13.215 filename=/dev/nvme1n1 00:25:13.215 [job3] 00:25:13.215 filename=/dev/nvme2n1 00:25:13.215 [job4] 00:25:13.215 filename=/dev/nvme3n1 00:25:13.215 [job5] 00:25:13.215 filename=/dev/nvme4n1 00:25:13.215 [job6] 00:25:13.215 filename=/dev/nvme5n1 00:25:13.215 [job7] 00:25:13.215 filename=/dev/nvme6n1 00:25:13.215 [job8] 00:25:13.215 filename=/dev/nvme7n1 00:25:13.215 [job9] 00:25:13.215 filename=/dev/nvme8n1 00:25:13.215 [job10] 00:25:13.215 filename=/dev/nvme9n1 00:25:13.485 Could not set queue depth (nvme0n1) 00:25:13.485 Could not set queue depth (nvme10n1) 00:25:13.485 Could not set queue depth (nvme1n1) 00:25:13.485 Could not set queue depth (nvme2n1) 00:25:13.485 Could not set queue depth (nvme3n1) 00:25:13.485 Could not set queue depth (nvme4n1) 00:25:13.485 Could not set queue depth (nvme5n1) 00:25:13.485 Could not set queue depth (nvme6n1) 00:25:13.485 Could not set queue depth (nvme7n1) 00:25:13.485 Could not set queue depth (nvme8n1) 00:25:13.485 Could not set queue depth (nvme9n1) 00:25:13.742 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.742 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.742 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.742 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.742 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.742 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.742 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.742 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.742 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.742 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.742 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:13.742 fio-3.35 00:25:13.742 Starting 11 threads 00:25:25.977 00:25:25.977 job0: (groupid=0, jobs=1): err= 0: pid=2126645: Sun Oct 6 11:20:21 2024 00:25:25.977 read: IOPS=182, BW=45.7MiB/s (47.9MB/s)(462MiB/10125msec) 00:25:25.977 slat (usec): min=15, max=450401, avg=2835.81, stdev=20501.44 00:25:25.977 clat (msec): min=2, max=922, avg=347.20, stdev=245.39 00:25:25.977 lat (msec): min=2, max=922, avg=350.03, stdev=247.25 00:25:25.977 clat percentiles (msec): 00:25:25.977 | 1.00th=[ 9], 5.00th=[ 22], 10.00th=[ 28], 20.00th=[ 89], 00:25:25.977 | 30.00th=[ 120], 40.00th=[ 251], 50.00th=[ 330], 60.00th=[ 409], 00:25:25.977 | 70.00th=[ 493], 80.00th=[ 609], 90.00th=[ 684], 95.00th=[ 751], 00:25:25.977 | 99.00th=[ 827], 99.50th=[ 835], 99.90th=[ 927], 99.95th=[ 927], 00:25:25.977 | 99.99th=[ 927] 00:25:25.977 bw ( KiB/s): min=18944, max=164864, per=5.36%, avg=45696.00, stdev=33783.37, samples=20 00:25:25.977 iops : min= 74, max= 644, avg=178.50, stdev=131.97, samples=20 00:25:25.977 lat (msec) : 4=0.22%, 10=1.03%, 20=2.60%, 50=10.11%, 100=11.52% 00:25:25.977 lat (msec) : 250=14.39%, 500=30.39%, 750=23.85%, 1000=5.90% 00:25:25.977 cpu : usr=0.04%, sys=0.77%, ctx=359, majf=0, minf=4097 00:25:25.977 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:25:25.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.977 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:25.977 issued rwts: total=1849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.977 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:25.977 job1: (groupid=0, jobs=1): err= 0: pid=2126646: Sun Oct 6 11:20:21 2024 00:25:25.977 read: IOPS=323, BW=80.9MiB/s (84.8MB/s)(822MiB/10166msec) 00:25:25.977 slat (usec): min=22, max=236071, avg=1267.64, stdev=9613.19 00:25:25.977 clat (usec): min=857, max=803071, avg=196296.70, stdev=189777.50 00:25:25.977 lat (usec): min=887, max=982117, avg=197564.35, stdev=191347.79 00:25:25.977 clat percentiles (usec): 00:25:25.977 | 1.00th=[ 1713], 5.00th=[ 8979], 10.00th=[ 12518], 20.00th=[ 29492], 00:25:25.977 | 30.00th=[ 63177], 40.00th=[ 94897], 50.00th=[137364], 60.00th=[181404], 00:25:25.977 | 70.00th=[258999], 80.00th=[333448], 90.00th=[505414], 95.00th=[624952], 00:25:25.977 | 99.00th=[725615], 99.50th=[767558], 99.90th=[801113], 99.95th=[801113], 00:25:25.977 
| 99.99th=[801113] 00:25:25.977 bw ( KiB/s): min=23552, max=213504, per=9.69%, avg=82565.10, stdev=52395.10, samples=20 00:25:25.977 iops : min= 92, max= 834, avg=322.50, stdev=204.68, samples=20 00:25:25.977 lat (usec) : 1000=0.27% 00:25:25.977 lat (msec) : 2=0.76%, 4=1.19%, 10=3.83%, 20=9.94%, 50=10.73% 00:25:25.977 lat (msec) : 100=13.77%, 250=28.85%, 500=20.52%, 750=9.33%, 1000=0.79% 00:25:25.977 cpu : usr=0.23%, sys=1.35%, ctx=1290, majf=0, minf=4097 00:25:25.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:25:25.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:25.977 issued rwts: total=3289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.977 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:25.977 job2: (groupid=0, jobs=1): err= 0: pid=2126647: Sun Oct 6 11:20:21 2024 00:25:25.977 read: IOPS=303, BW=75.8MiB/s (79.5MB/s)(771MiB/10168msec) 00:25:25.977 slat (usec): min=15, max=351266, avg=2268.40, stdev=13191.54 00:25:25.977 clat (usec): min=1727, max=825948, avg=208464.73, stdev=185015.41 00:25:25.977 lat (usec): min=1759, max=926369, avg=210733.14, stdev=186098.55 00:25:25.977 clat percentiles (msec): 00:25:25.977 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 28], 20.00th=[ 34], 00:25:25.977 | 30.00th=[ 92], 40.00th=[ 131], 50.00th=[ 174], 60.00th=[ 211], 00:25:25.977 | 70.00th=[ 245], 80.00th=[ 305], 90.00th=[ 510], 95.00th=[ 634], 00:25:25.977 | 99.00th=[ 735], 99.50th=[ 768], 99.90th=[ 810], 99.95th=[ 827], 00:25:25.977 | 99.99th=[ 827] 00:25:25.977 bw ( KiB/s): min=31744, max=300544, per=9.07%, avg=77312.00, stdev=58430.61, samples=20 00:25:25.977 iops : min= 124, max= 1174, avg=302.00, stdev=228.24, samples=20 00:25:25.977 lat (msec) : 2=0.03%, 4=0.29%, 10=4.09%, 20=3.40%, 50=17.19% 00:25:25.977 lat (msec) : 100=8.11%, 250=37.81%, 500=18.16%, 750=10.12%, 1000=0.81% 00:25:25.977 cpu : usr=0.11%, sys=1.35%, ctx=684, majf=0, minf=4097 00:25:25.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:25:25.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:25.977 issued rwts: total=3084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.977 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:25.977 job3: (groupid=0, jobs=1): err= 0: pid=2126648: Sun Oct 6 11:20:21 2024 00:25:25.977 read: IOPS=711, BW=178MiB/s (186MB/s)(1782MiB/10023msec) 00:25:25.977 slat (usec): min=10, max=262101, avg=1229.56, stdev=6858.35 00:25:25.977 clat (usec): min=756, max=954766, avg=88672.68, stdev=131231.27 00:25:25.977 lat (usec): min=787, max=954796, avg=89902.25, stdev=132590.72 00:25:25.977 clat percentiles (msec): 00:25:25.977 | 1.00th=[ 28], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 34], 00:25:25.977 | 30.00th=[ 35], 40.00th=[ 37], 50.00th=[ 39], 60.00th=[ 42], 00:25:25.977 | 70.00th=[ 60], 80.00th=[ 99], 90.00th=[ 215], 95.00th=[ 363], 00:25:25.977 | 99.00th=[ 793], 99.50th=[ 894], 99.90th=[ 927], 99.95th=[ 953], 00:25:25.977 | 99.99th=[ 953] 00:25:25.977 bw ( KiB/s): min=17920, max=463872, per=21.22%, avg=180868.60, stdev=162057.21, samples=20 00:25:25.977 iops : min= 70, max= 1812, avg=706.50, stdev=633.05, samples=20 00:25:25.977 lat (usec) : 1000=0.45% 00:25:25.977 lat (msec) : 2=0.01%, 4=0.01%, 10=0.06%, 20=0.01%, 50=65.49% 00:25:25.977 lat (msec) : 100=14.62%, 250=10.72%, 500=6.51%, 750=0.93%, 
1000=1.19% 00:25:25.977 cpu : usr=0.28%, sys=2.71%, ctx=1159, majf=0, minf=4097 00:25:25.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:25.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:25.977 issued rwts: total=7128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.977 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:25.977 job4: (groupid=0, jobs=1): err= 0: pid=2126649: Sun Oct 6 11:20:21 2024 00:25:25.977 read: IOPS=253, BW=63.4MiB/s (66.5MB/s)(644MiB/10150msec) 00:25:25.977 slat (usec): min=15, max=274725, avg=2452.74, stdev=14378.69 00:25:25.977 clat (usec): min=769, max=864458, avg=249591.08, stdev=217044.40 00:25:25.977 lat (usec): min=800, max=886185, avg=252043.82, stdev=219239.70 00:25:25.978 clat percentiles (usec): 00:25:25.978 | 1.00th=[ 1385], 5.00th=[ 38536], 10.00th=[ 42206], 20.00th=[ 49546], 00:25:25.978 | 30.00th=[ 66847], 40.00th=[109577], 50.00th=[149947], 60.00th=[267387], 00:25:25.978 | 70.00th=[354419], 80.00th=[463471], 90.00th=[608175], 95.00th=[666895], 00:25:25.978 | 99.00th=[734004], 99.50th=[742392], 99.90th=[843056], 99.95th=[868221], 00:25:25.978 | 99.99th=[868221] 00:25:25.978 bw ( KiB/s): min=20480, max=250880, per=7.54%, avg=64256.00, stdev=60256.27, samples=20 00:25:25.978 iops : min= 80, max= 980, avg=251.00, stdev=235.38, samples=20 00:25:25.978 lat (usec) : 1000=0.89% 00:25:25.978 lat (msec) : 2=0.16%, 4=0.35%, 10=0.58%, 20=0.16%, 50=18.26% 00:25:25.978 lat (msec) : 100=17.91%, 250=20.63%, 500=23.78%, 750=16.82%, 1000=0.47% 00:25:25.978 cpu : usr=0.08%, sys=1.09%, ctx=544, majf=0, minf=4097 00:25:25.978 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:25:25.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:25.978 issued rwts: total=2574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.978 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:25.978 job5: (groupid=0, jobs=1): err= 0: pid=2126656: Sun Oct 6 11:20:21 2024 00:25:25.978 read: IOPS=298, BW=74.7MiB/s (78.3MB/s)(758MiB/10151msec) 00:25:25.978 slat (usec): min=17, max=505256, avg=2195.31, stdev=15383.67 00:25:25.978 clat (msec): min=2, max=984, avg=211.81, stdev=214.90 00:25:25.978 lat (msec): min=3, max=984, avg=214.00, stdev=217.48 00:25:25.978 clat percentiles (msec): 00:25:25.978 | 1.00th=[ 16], 5.00th=[ 31], 10.00th=[ 37], 20.00th=[ 44], 00:25:25.978 | 30.00th=[ 52], 40.00th=[ 71], 50.00th=[ 104], 60.00th=[ 161], 00:25:25.978 | 70.00th=[ 292], 80.00th=[ 397], 90.00th=[ 567], 95.00th=[ 651], 00:25:25.978 | 99.00th=[ 911], 99.50th=[ 927], 99.90th=[ 986], 99.95th=[ 986], 00:25:25.978 | 99.99th=[ 986] 00:25:25.978 bw ( KiB/s): min=13312, max=308736, per=8.91%, avg=75980.80, stdev=78068.69, samples=20 00:25:25.978 iops : min= 52, max= 1206, avg=296.80, stdev=304.96, samples=20 00:25:25.978 lat (msec) : 4=0.23%, 10=0.53%, 20=0.92%, 50=27.21%, 100=19.76% 00:25:25.978 lat (msec) : 250=17.71%, 500=19.49%, 750=12.14%, 1000=2.01% 00:25:25.978 cpu : usr=0.12%, sys=1.29%, ctx=656, majf=0, minf=4097 00:25:25.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:25:25.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:25.978 issued rwts: 
total=3032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.978 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:25.978 job6: (groupid=0, jobs=1): err= 0: pid=2126661: Sun Oct 6 11:20:21 2024 00:25:25.978 read: IOPS=227, BW=57.0MiB/s (59.7MB/s)(578MiB/10148msec) 00:25:25.978 slat (usec): min=16, max=493823, avg=3060.78, stdev=20817.59 00:25:25.978 clat (msec): min=4, max=926, avg=277.51, stdev=247.98 00:25:25.978 lat (msec): min=4, max=942, avg=280.57, stdev=250.46 00:25:25.978 clat percentiles (msec): 00:25:25.978 | 1.00th=[ 13], 5.00th=[ 23], 10.00th=[ 32], 20.00th=[ 48], 00:25:25.978 | 30.00th=[ 69], 40.00th=[ 122], 50.00th=[ 174], 60.00th=[ 296], 00:25:25.978 | 70.00th=[ 439], 80.00th=[ 527], 90.00th=[ 667], 95.00th=[ 760], 00:25:25.978 | 99.00th=[ 827], 99.50th=[ 835], 99.90th=[ 911], 99.95th=[ 927], 00:25:25.978 | 99.99th=[ 927] 00:25:25.978 bw ( KiB/s): min= 9216, max=221696, per=6.75%, avg=57548.80, stdev=56714.60, samples=20 00:25:25.978 iops : min= 36, max= 866, avg=224.80, stdev=221.54, samples=20 00:25:25.978 lat (msec) : 10=0.39%, 20=2.81%, 50=19.68%, 100=14.79%, 250=18.69% 00:25:25.978 lat (msec) : 500=21.63%, 750=16.13%, 1000=5.88% 00:25:25.978 cpu : usr=0.09%, sys=1.02%, ctx=505, majf=0, minf=4097 00:25:25.978 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:25:25.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:25.978 issued rwts: total=2312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.978 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:25.978 job7: (groupid=0, jobs=1): err= 0: pid=2126670: Sun Oct 6 11:20:21 2024 00:25:25.978 read: IOPS=171, BW=42.8MiB/s (44.9MB/s)(435MiB/10153msec) 00:25:25.978 slat (usec): min=9, max=215491, avg=2938.75, stdev=16507.96 00:25:25.978 clat (usec): min=1074, max=843806, avg=369907.28, stdev=231859.26 00:25:25.978 lat (usec): min=1129, max=860951, avg=372846.02, stdev=233285.18 00:25:25.978 clat percentiles (msec): 00:25:25.978 | 1.00th=[ 12], 5.00th=[ 16], 10.00th=[ 32], 20.00th=[ 102], 00:25:25.978 | 30.00th=[ 192], 40.00th=[ 317], 50.00th=[ 393], 60.00th=[ 477], 00:25:25.978 | 70.00th=[ 531], 80.00th=[ 584], 90.00th=[ 676], 95.00th=[ 718], 00:25:25.978 | 99.00th=[ 793], 99.50th=[ 827], 99.90th=[ 844], 99.95th=[ 844], 00:25:25.978 | 99.99th=[ 844] 00:25:25.978 bw ( KiB/s): min=18944, max=131072, per=5.03%, avg=42905.60, stdev=27185.83, samples=20 00:25:25.978 iops : min= 74, max= 512, avg=167.60, stdev=106.19, samples=20 00:25:25.978 lat (msec) : 2=0.17%, 10=0.57%, 20=6.09%, 50=6.32%, 100=6.78% 00:25:25.978 lat (msec) : 250=14.71%, 500=28.22%, 750=34.25%, 1000=2.87% 00:25:25.978 cpu : usr=0.09%, sys=0.71%, ctx=478, majf=0, minf=3722 00:25:25.978 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:25:25.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.978 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:25.978 issued rwts: total=1740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.978 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:25.978 job8: (groupid=0, jobs=1): err= 0: pid=2126711: Sun Oct 6 11:20:21 2024 00:25:25.978 read: IOPS=267, BW=66.8MiB/s (70.0MB/s)(679MiB/10161msec) 00:25:25.978 slat (usec): min=15, max=241801, avg=2654.57, stdev=13078.90 00:25:25.978 clat (usec): min=1702, max=775311, avg=236695.73, stdev=169250.92 00:25:25.978 lat (usec): min=1746, 
max=781059, avg=239350.31, stdev=170868.09 00:25:25.978 clat percentiles (msec): 00:25:25.978 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 36], 20.00th=[ 67], 00:25:25.978 | 30.00th=[ 140], 40.00th=[ 178], 50.00th=[ 215], 60.00th=[ 253], 00:25:25.978 | 70.00th=[ 300], 80.00th=[ 363], 90.00th=[ 477], 95.00th=[ 584], 00:25:25.978 | 99.00th=[ 701], 99.50th=[ 776], 99.90th=[ 776], 99.95th=[ 776], 00:25:25.978 | 99.99th=[ 776] 00:25:25.978 bw ( KiB/s): min=18432, max=156672, per=7.96%, avg=67845.55, stdev=39951.69, samples=20 00:25:25.978 iops : min= 72, max= 612, avg=265.00, stdev=156.07, samples=20 00:25:25.978 lat (msec) : 2=0.04%, 4=0.63%, 10=2.98%, 20=2.28%, 50=8.77% 00:25:25.978 lat (msec) : 100=10.17%, 250=34.56%, 500=31.72%, 750=8.25%, 1000=0.59% 00:25:25.978 cpu : usr=0.09%, sys=1.07%, ctx=648, majf=0, minf=4097 00:25:25.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:25:25.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:25.978 issued rwts: total=2714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.978 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:25.978 job9: (groupid=0, jobs=1): err= 0: pid=2126736: Sun Oct 6 11:20:21 2024 00:25:25.978 read: IOPS=356, BW=89.2MiB/s (93.6MB/s)(907MiB/10163msec) 00:25:25.978 slat (usec): min=9, max=497118, avg=1443.17, stdev=12927.54 00:25:25.978 clat (usec): min=1033, max=1153.0k, avg=177662.65, stdev=209652.08 00:25:25.978 lat (usec): min=1076, max=1153.0k, avg=179105.82, stdev=211045.91 00:25:25.978 clat percentiles (usec): 00:25:25.978 | 1.00th=[ 1401], 5.00th=[ 5276], 10.00th=[ 11469], 00:25:25.978 | 20.00th=[ 25560], 30.00th=[ 45351], 40.00th=[ 56886], 00:25:25.978 | 50.00th=[ 94897], 60.00th=[ 117965], 70.00th=[ 189793], 00:25:25.978 | 80.00th=[ 291505], 90.00th=[ 530580], 95.00th=[ 675283], 00:25:25.978 | 99.00th=[ 834667], 99.50th=[ 851444], 99.90th=[ 926942], 00:25:25.978 | 99.95th=[ 926942], 99.99th=[1149240] 00:25:25.978 bw ( KiB/s): min= 512, max=258560, per=10.70%, avg=91238.40, stdev=71344.61, samples=20 00:25:25.978 iops : min= 2, max= 1010, avg=356.40, stdev=278.69, samples=20 00:25:25.978 lat (msec) : 2=1.71%, 4=2.15%, 10=4.44%, 20=5.18%, 50=19.27% 00:25:25.978 lat (msec) : 100=19.74%, 250=22.52%, 500=14.25%, 750=6.84%, 1000=3.89% 00:25:25.978 lat (msec) : 2000=0.03% 00:25:25.978 cpu : usr=0.16%, sys=1.25%, ctx=944, majf=0, minf=4097 00:25:25.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:25:25.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:25.978 issued rwts: total=3628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.978 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:25.978 job10: (groupid=0, jobs=1): err= 0: pid=2126756: Sun Oct 6 11:20:21 2024 00:25:25.978 read: IOPS=246, BW=61.7MiB/s (64.7MB/s)(627MiB/10167msec) 00:25:25.978 slat (usec): min=9, max=1203.4k, avg=3083.89, stdev=27982.37 00:25:25.978 clat (msec): min=2, max=1261, avg=255.99, stdev=221.49 00:25:25.978 lat (msec): min=2, max=1550, avg=259.07, stdev=223.49 00:25:25.978 clat percentiles (msec): 00:25:25.978 | 1.00th=[ 13], 5.00th=[ 22], 10.00th=[ 56], 20.00th=[ 96], 00:25:25.978 | 30.00th=[ 136], 40.00th=[ 157], 50.00th=[ 184], 60.00th=[ 247], 00:25:25.978 | 70.00th=[ 326], 80.00th=[ 388], 90.00th=[ 472], 95.00th=[ 584], 00:25:25.978 | 99.00th=[ 
1250], 99.50th=[ 1267], 99.90th=[ 1267], 99.95th=[ 1267], 00:25:25.978 | 99.99th=[ 1267] 00:25:25.978 bw ( KiB/s): min=17408, max=141824, per=7.73%, avg=65886.32, stdev=37055.52, samples=19 00:25:25.978 iops : min= 68, max= 554, avg=257.37, stdev=144.75, samples=19 00:25:25.978 lat (msec) : 4=0.08%, 10=0.56%, 20=3.91%, 50=4.55%, 100=11.88% 00:25:25.978 lat (msec) : 250=40.27%, 500=30.06%, 750=6.14%, 1000=0.04%, 2000=2.51% 00:25:25.978 cpu : usr=0.13%, sys=1.02%, ctx=449, majf=0, minf=4097 00:25:25.978 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:25:25.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:25.979 issued rwts: total=2508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.979 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:25.979 00:25:25.979 Run status group 0 (all jobs): 00:25:25.979 READ: bw=832MiB/s (873MB/s), 42.8MiB/s-178MiB/s (44.9MB/s-186MB/s), io=8465MiB (8876MB), run=10023-10168msec 00:25:25.979 00:25:25.979 Disk stats (read/write): 00:25:25.979 nvme0n1: ios=3566/0, merge=0/0, ticks=1209603/0, in_queue=1209603, util=94.59% 00:25:25.979 nvme10n1: ios=6523/0, merge=0/0, ticks=1260686/0, in_queue=1260686, util=95.12% 00:25:25.979 nvme1n1: ios=6103/0, merge=0/0, ticks=1254401/0, in_queue=1254401, util=95.70% 00:25:25.979 nvme2n1: ios=13763/0, merge=0/0, ticks=1236909/0, in_queue=1236909, util=95.92% 00:25:25.979 nvme3n1: ios=5122/0, merge=0/0, ticks=1262742/0, in_queue=1262742, util=96.25% 00:25:25.979 nvme4n1: ios=6023/0, merge=0/0, ticks=1258796/0, in_queue=1258796, util=97.01% 00:25:25.979 nvme5n1: ios=4466/0, merge=0/0, ticks=1210754/0, in_queue=1210754, util=97.28% 00:25:25.979 nvme6n1: ios=3449/0, merge=0/0, ticks=1267112/0, in_queue=1267112, util=97.61% 00:25:25.979 nvme7n1: ios=5388/0, merge=0/0, ticks=1261810/0, in_queue=1261810, util=98.53% 00:25:25.979 nvme8n1: ios=7173/0, merge=0/0, ticks=1249109/0, in_queue=1249109, util=98.94% 00:25:25.979 nvme9n1: ios=4932/0, merge=0/0, ticks=1240977/0, in_queue=1240977, util=99.27% 00:25:25.979 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:25.979 [global] 00:25:25.979 thread=1 00:25:25.979 invalidate=1 00:25:25.979 rw=randwrite 00:25:25.979 time_based=1 00:25:25.979 runtime=10 00:25:25.979 ioengine=libaio 00:25:25.979 direct=1 00:25:25.979 bs=262144 00:25:25.979 iodepth=64 00:25:25.979 norandommap=1 00:25:25.979 numjobs=1 00:25:25.979 00:25:25.979 [job0] 00:25:25.979 filename=/dev/nvme0n1 00:25:25.979 [job1] 00:25:25.979 filename=/dev/nvme10n1 00:25:25.979 [job2] 00:25:25.979 filename=/dev/nvme1n1 00:25:25.979 [job3] 00:25:25.979 filename=/dev/nvme2n1 00:25:25.979 [job4] 00:25:25.979 filename=/dev/nvme3n1 00:25:25.979 [job5] 00:25:25.979 filename=/dev/nvme4n1 00:25:25.979 [job6] 00:25:25.979 filename=/dev/nvme5n1 00:25:25.979 [job7] 00:25:25.979 filename=/dev/nvme6n1 00:25:25.979 [job8] 00:25:25.979 filename=/dev/nvme7n1 00:25:25.979 [job9] 00:25:25.979 filename=/dev/nvme8n1 00:25:25.979 [job10] 00:25:25.979 filename=/dev/nvme9n1 00:25:25.979 Could not set queue depth (nvme0n1) 00:25:25.979 Could not set queue depth (nvme10n1) 00:25:25.979 Could not set queue depth (nvme1n1) 00:25:25.979 Could not set queue depth (nvme2n1) 00:25:25.979 Could not set queue depth (nvme3n1) 00:25:25.979 Could not set queue 
depth (nvme4n1) 00:25:25.979 Could not set queue depth (nvme5n1) 00:25:25.979 Could not set queue depth (nvme6n1) 00:25:25.979 Could not set queue depth (nvme7n1) 00:25:25.979 Could not set queue depth (nvme8n1) 00:25:25.979 Could not set queue depth (nvme9n1) 00:25:25.979 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.979 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.979 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.979 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.979 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.979 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.979 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.979 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.979 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.979 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.979 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:25.979 fio-3.35 00:25:25.979 Starting 11 threads 00:25:35.949 00:25:35.949 job0: (groupid=0, jobs=1): err= 0: pid=2127929: Sun Oct 6 11:20:32 2024 00:25:35.949 write: IOPS=341, BW=85.3MiB/s (89.5MB/s)(869MiB/10183msec); 0 zone resets 00:25:35.949 slat (usec): min=21, max=43726, avg=2039.57, stdev=6026.35 00:25:35.949 clat (usec): min=1502, max=578915, avg=185428.14, stdev=135015.58 00:25:35.949 lat (usec): min=1559, max=578958, avg=187467.71, stdev=136727.03 00:25:35.949 clat percentiles (msec): 00:25:35.949 | 1.00th=[ 7], 5.00th=[ 20], 10.00th=[ 36], 20.00th=[ 68], 00:25:35.949 | 30.00th=[ 86], 40.00th=[ 114], 50.00th=[ 138], 60.00th=[ 184], 00:25:35.949 | 70.00th=[ 279], 80.00th=[ 347], 90.00th=[ 376], 95.00th=[ 401], 00:25:35.949 | 99.00th=[ 498], 99.50th=[ 510], 99.90th=[ 550], 99.95th=[ 584], 00:25:35.949 | 99.99th=[ 584] 00:25:35.949 bw ( KiB/s): min=32768, max=205824, per=8.23%, avg=87321.60, stdev=55931.22, samples=20 00:25:35.949 iops : min= 128, max= 804, avg=341.10, stdev=218.48, samples=20 00:25:35.949 lat (msec) : 2=0.09%, 4=0.49%, 10=1.67%, 20=2.82%, 50=12.52% 00:25:35.949 lat (msec) : 100=19.08%, 250=30.22%, 500=32.23%, 750=0.89% 00:25:35.949 cpu : usr=0.72%, sys=1.21%, ctx=1912, majf=0, minf=1 00:25:35.949 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:25:35.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:35.949 issued rwts: total=0,3475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:35.949 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:35.949 job1: (groupid=0, jobs=1): err= 0: pid=2127943: Sun Oct 6 11:20:32 2024 00:25:35.949 write: IOPS=397, BW=99.3MiB/s (104MB/s)(999MiB/10064msec); 0 zone resets 00:25:35.949 slat (usec): min=21, max=98997, avg=2015.50, stdev=5564.66 
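(The waitforserial polling that the connect phase above keeps tracing — the autotest_common.sh @1198-@1208 lines — reduces to roughly the shape below. This is a sketch reconstructed from the xtrace output alone, not the verbatim helper; the retry ceiling of 15, the 2-second sleep, and the lsblk/grep check are taken from the trace, while the function body layout and argument handling are assumptions.)

# Sketch of waitforserial as implied by the xtrace above (autotest_common.sh @1198-@1208).
# Reconstructed from the log; the real helper may carry extra checks.
waitforserial() {
	local serial=$1
	local i=0
	local nvme_device_counter=1 nvme_devices=0
	# An optional second argument appears to override the expected device count.
	if [[ -n "$2" ]]; then
		nvme_device_counter=$2
	fi
	sleep 2 # the trace shows a fixed settle delay before the first check
	while ((i++ <= 15)); do
		# Count block devices whose SERIAL column matches the requested SPDK serial.
		nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
		((nvme_devices == nvme_device_counter)) && return 0
		sleep 2
	done
	return 1
}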
00:25:35.949 clat (usec): min=1203, max=540525, avg=159087.56, stdev=107501.62 00:25:35.949 lat (usec): min=1263, max=540586, avg=161103.07, stdev=108918.53 00:25:35.949 clat percentiles (msec): 00:25:35.949 | 1.00th=[ 7], 5.00th=[ 22], 10.00th=[ 51], 20.00th=[ 64], 00:25:35.949 | 30.00th=[ 86], 40.00th=[ 121], 50.00th=[ 136], 60.00th=[ 163], 00:25:35.949 | 70.00th=[ 199], 80.00th=[ 232], 90.00th=[ 326], 95.00th=[ 376], 00:25:35.949 | 99.00th=[ 502], 99.50th=[ 523], 99.90th=[ 542], 99.95th=[ 542], 00:25:35.949 | 99.99th=[ 542] 00:25:35.949 bw ( KiB/s): min=30720, max=250880, per=9.50%, avg=100724.35, stdev=56548.66, samples=20 00:25:35.949 iops : min= 120, max= 980, avg=393.45, stdev=220.89, samples=20 00:25:35.949 lat (msec) : 2=0.20%, 4=0.48%, 10=1.33%, 20=1.95%, 50=6.00% 00:25:35.949 lat (msec) : 100=24.34%, 250=50.41%, 500=14.26%, 750=1.03% 00:25:35.949 cpu : usr=0.87%, sys=1.21%, ctx=1867, majf=0, minf=1 00:25:35.949 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:35.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:35.949 issued rwts: total=0,3997,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:35.949 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:35.949 job2: (groupid=0, jobs=1): err= 0: pid=2127944: Sun Oct 6 11:20:32 2024 00:25:35.949 write: IOPS=494, BW=124MiB/s (130MB/s)(1249MiB/10110msec); 0 zone resets 00:25:35.949 slat (usec): min=30, max=69780, avg=1642.45, stdev=4401.28 00:25:35.949 clat (usec): min=1480, max=397865, avg=127537.33, stdev=88352.64 00:25:35.949 lat (usec): min=1540, max=397910, avg=129179.78, stdev=89464.55 00:25:35.949 clat percentiles (msec): 00:25:35.949 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 25], 20.00th=[ 58], 00:25:35.949 | 30.00th=[ 63], 40.00th=[ 95], 50.00th=[ 111], 60.00th=[ 122], 00:25:35.949 | 70.00th=[ 161], 80.00th=[ 209], 90.00th=[ 257], 95.00th=[ 309], 00:25:35.949 | 99.00th=[ 372], 99.50th=[ 388], 99.90th=[ 397], 99.95th=[ 397], 00:25:35.949 | 99.99th=[ 397] 00:25:35.949 bw ( KiB/s): min=45056, max=276480, per=11.91%, avg=126299.85, stdev=65966.64, samples=20 00:25:35.949 iops : min= 176, max= 1080, avg=493.35, stdev=257.68, samples=20 00:25:35.949 lat (msec) : 2=0.10%, 4=0.26%, 10=3.90%, 20=4.30%, 50=7.89% 00:25:35.949 lat (msec) : 100=25.60%, 250=47.16%, 500=10.79% 00:25:35.949 cpu : usr=1.43%, sys=1.64%, ctx=2258, majf=0, minf=1 00:25:35.949 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:35.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:35.949 issued rwts: total=0,4996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:35.949 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:35.949 job3: (groupid=0, jobs=1): err= 0: pid=2127945: Sun Oct 6 11:20:32 2024 00:25:35.949 write: IOPS=315, BW=78.9MiB/s (82.7MB/s)(793MiB/10052msec); 0 zone resets 00:25:35.949 slat (usec): min=24, max=128760, avg=1905.53, stdev=7078.16 00:25:35.949 clat (usec): min=1449, max=607946, avg=200901.50, stdev=157792.91 00:25:35.949 lat (usec): min=1489, max=608005, avg=202807.03, stdev=159679.91 00:25:35.949 clat percentiles (msec): 00:25:35.949 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 12], 20.00th=[ 25], 00:25:35.949 | 30.00th=[ 52], 40.00th=[ 121], 50.00th=[ 197], 60.00th=[ 288], 00:25:35.949 | 70.00th=[ 326], 80.00th=[ 351], 90.00th=[ 393], 95.00th=[ 443], 
00:25:35.949 | 99.00th=[ 567], 99.50th=[ 575], 99.90th=[ 600], 99.95th=[ 609], 00:25:35.950 | 99.99th=[ 609] 00:25:35.950 bw ( KiB/s): min=26624, max=308224, per=7.50%, avg=79570.40, stdev=59389.33, samples=20 00:25:35.950 iops : min= 104, max= 1204, avg=310.80, stdev=232.00, samples=20 00:25:35.950 lat (msec) : 2=0.09%, 4=0.63%, 10=7.85%, 20=8.51%, 50=12.30% 00:25:35.950 lat (msec) : 100=7.69%, 250=18.61%, 500=42.10%, 750=2.21% 00:25:35.950 cpu : usr=0.73%, sys=1.13%, ctx=2258, majf=0, minf=1 00:25:35.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:25:35.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:35.950 issued rwts: total=0,3171,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:35.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:35.950 job4: (groupid=0, jobs=1): err= 0: pid=2127946: Sun Oct 6 11:20:32 2024 00:25:35.950 write: IOPS=297, BW=74.5MiB/s (78.1MB/s)(758MiB/10181msec); 0 zone resets 00:25:35.950 slat (usec): min=17, max=120382, avg=2415.74, stdev=7510.56 00:25:35.950 clat (usec): min=1283, max=568421, avg=212365.38, stdev=141177.60 00:25:35.950 lat (usec): min=1375, max=568464, avg=214781.13, stdev=143029.92 00:25:35.950 clat percentiles (msec): 00:25:35.950 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 37], 20.00th=[ 80], 00:25:35.950 | 30.00th=[ 113], 40.00th=[ 131], 50.00th=[ 182], 60.00th=[ 257], 00:25:35.950 | 70.00th=[ 334], 80.00th=[ 363], 90.00th=[ 388], 95.00th=[ 447], 00:25:35.950 | 99.00th=[ 493], 99.50th=[ 506], 99.90th=[ 542], 99.95th=[ 567], 00:25:35.950 | 99.99th=[ 567] 00:25:35.950 bw ( KiB/s): min=34816, max=141312, per=7.16%, avg=75991.15, stdev=37014.21, samples=20 00:25:35.950 iops : min= 136, max= 552, avg=296.80, stdev=144.56, samples=20 00:25:35.950 lat (msec) : 2=0.20%, 4=2.18%, 10=2.44%, 20=3.23%, 50=6.60% 00:25:35.950 lat (msec) : 100=11.68%, 250=33.08%, 500=39.81%, 750=0.79% 00:25:35.950 cpu : usr=0.74%, sys=0.94%, ctx=1728, majf=0, minf=2 00:25:35.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:25:35.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:35.950 issued rwts: total=0,3032,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:35.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:35.950 job5: (groupid=0, jobs=1): err= 0: pid=2127947: Sun Oct 6 11:20:32 2024 00:25:35.950 write: IOPS=273, BW=68.5MiB/s (71.8MB/s)(697MiB/10182msec); 0 zone resets 00:25:35.950 slat (usec): min=25, max=73914, avg=2629.58, stdev=7237.42 00:25:35.950 clat (usec): min=1058, max=578932, avg=230902.50, stdev=136205.83 00:25:35.950 lat (usec): min=1087, max=578973, avg=233532.08, stdev=138164.11 00:25:35.950 clat percentiles (msec): 00:25:35.950 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 41], 20.00th=[ 100], 00:25:35.950 | 30.00th=[ 130], 40.00th=[ 169], 50.00th=[ 236], 60.00th=[ 309], 00:25:35.950 | 70.00th=[ 342], 80.00th=[ 359], 90.00th=[ 393], 95.00th=[ 422], 00:25:35.950 | 99.00th=[ 498], 99.50th=[ 506], 99.90th=[ 550], 99.95th=[ 584], 00:25:35.950 | 99.99th=[ 584] 00:25:35.950 bw ( KiB/s): min=34816, max=182272, per=6.58%, avg=69760.00, stdev=40182.90, samples=20 00:25:35.950 iops : min= 136, max= 712, avg=272.50, stdev=156.96, samples=20 00:25:35.950 lat (msec) : 2=0.97%, 4=2.62%, 10=2.80%, 20=1.33%, 50=4.02% 00:25:35.950 lat (msec) : 100=8.35%, 250=31.73%, 
500=47.51%, 750=0.68% 00:25:35.950 cpu : usr=0.66%, sys=0.90%, ctx=1553, majf=0, minf=1 00:25:35.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.7% 00:25:35.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:35.950 issued rwts: total=0,2789,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:35.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:35.950 job6: (groupid=0, jobs=1): err= 0: pid=2127948: Sun Oct 6 11:20:32 2024 00:25:35.950 write: IOPS=332, BW=83.2MiB/s (87.3MB/s)(847MiB/10180msec); 0 zone resets 00:25:35.950 slat (usec): min=28, max=121958, avg=2066.81, stdev=6300.71 00:25:35.950 clat (usec): min=984, max=555169, avg=190027.06, stdev=118110.07 00:25:35.950 lat (usec): min=1024, max=555219, avg=192093.88, stdev=119860.76 00:25:35.950 clat percentiles (msec): 00:25:35.950 | 1.00th=[ 5], 5.00th=[ 25], 10.00th=[ 49], 20.00th=[ 83], 00:25:35.950 | 30.00th=[ 112], 40.00th=[ 138], 50.00th=[ 174], 60.00th=[ 197], 00:25:35.950 | 70.00th=[ 241], 80.00th=[ 326], 90.00th=[ 359], 95.00th=[ 393], 00:25:35.950 | 99.00th=[ 477], 99.50th=[ 502], 99.90th=[ 550], 99.95th=[ 550], 00:25:35.950 | 99.99th=[ 558] 00:25:35.950 bw ( KiB/s): min=41984, max=167936, per=8.03%, avg=85150.35, stdev=37592.43, samples=20 00:25:35.950 iops : min= 164, max= 656, avg=332.60, stdev=146.86, samples=20 00:25:35.950 lat (usec) : 1000=0.03% 00:25:35.950 lat (msec) : 2=0.24%, 4=0.56%, 10=0.83%, 20=2.15%, 50=6.34% 00:25:35.950 lat (msec) : 100=13.72%, 250=47.51%, 500=28.18%, 750=0.44% 00:25:35.950 cpu : usr=0.76%, sys=1.11%, ctx=1966, majf=0, minf=1 00:25:35.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:25:35.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:35.950 issued rwts: total=0,3389,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:35.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:35.950 job7: (groupid=0, jobs=1): err= 0: pid=2127949: Sun Oct 6 11:20:32 2024 00:25:35.950 write: IOPS=611, BW=153MiB/s (160MB/s)(1545MiB/10110msec); 0 zone resets 00:25:35.950 slat (usec): min=28, max=86236, avg=1242.67, stdev=3853.00 00:25:35.950 clat (msec): min=2, max=460, avg=103.40, stdev=95.70 00:25:35.950 lat (msec): min=2, max=460, avg=104.64, stdev=96.66 00:25:35.950 clat percentiles (msec): 00:25:35.950 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 28], 20.00th=[ 42], 00:25:35.950 | 30.00th=[ 43], 40.00th=[ 50], 50.00th=[ 54], 60.00th=[ 72], 00:25:35.950 | 70.00th=[ 124], 80.00th=[ 174], 90.00th=[ 251], 95.00th=[ 321], 00:25:35.950 | 99.00th=[ 401], 99.50th=[ 414], 99.90th=[ 430], 99.95th=[ 443], 00:25:35.950 | 99.99th=[ 460] 00:25:35.950 bw ( KiB/s): min=43520, max=392192, per=14.76%, avg=156595.20, stdev=101949.70, samples=20 00:25:35.950 iops : min= 170, max= 1532, avg=611.70, stdev=398.24, samples=20 00:25:35.950 lat (msec) : 4=0.16%, 10=1.96%, 20=4.21%, 50=35.45%, 100=22.82% 00:25:35.950 lat (msec) : 250=25.32%, 500=10.08% 00:25:35.950 cpu : usr=1.35%, sys=1.89%, ctx=2723, majf=0, minf=1 00:25:35.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:35.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:35.950 issued rwts: total=0,6180,0,0 short=0,0,0,0 dropped=0,0,0,0 
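(Likewise, the per-subsystem connect loop traced earlier in this run at target/multiconnection.sh @28-@30 boils down to the sketch below. The host UUID, target address 10.0.0.2, port 4420, and the 11-subsystem count are values visible in the trace; the variable names and quoting are assumptions made for readability.)

# Sketch of the connect phase as traced at target/multiconnection.sh @28-@30.
# Reconstructed from the log; variable names are assumptions.
NVMF_SUBSYS=11
TARGET_IP=10.0.0.2                              # address seen in every 'nvme connect' above
HOSTID="80b56b8f-cbc7-e911-906e-0017a4403562"   # host UUID seen in every 'nvme connect' above

for i in $(seq 1 "$NVMF_SUBSYS"); do
	nvme connect \
		--hostnqn="nqn.2014-08.org.nvmexpress:uuid:${HOSTID}" \
		--hostid="${HOSTID}" \
		-t tcp -n "nqn.2016-06.io.spdk:cnode${i}" \
		-a "$TARGET_IP" -s 4420
	waitforserial "SPDK${i}"   # poll until the namespace shows up in lsblk (see sketch above)
done
# The run then drives both fio phases through scripts/fio-wrapper
# (-t read and -t randwrite, 10 s each) before disconnecting every cnode.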
00:25:35.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:35.950 job8: (groupid=0, jobs=1): err= 0: pid=2127950: Sun Oct 6 11:20:32 2024 00:25:35.950 write: IOPS=312, BW=78.0MiB/s (81.8MB/s)(794MiB/10179msec); 0 zone resets 00:25:35.950 slat (usec): min=26, max=141233, avg=2342.27, stdev=7280.68 00:25:35.950 clat (usec): min=1109, max=614531, avg=202559.11, stdev=142431.96 00:25:35.950 lat (usec): min=1143, max=614597, avg=204901.38, stdev=144034.18 00:25:35.950 clat percentiles (msec): 00:25:35.950 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 19], 20.00th=[ 61], 00:25:35.950 | 30.00th=[ 85], 40.00th=[ 153], 50.00th=[ 186], 60.00th=[ 236], 00:25:35.950 | 70.00th=[ 288], 80.00th=[ 347], 90.00th=[ 405], 95.00th=[ 435], 00:25:35.950 | 99.00th=[ 514], 99.50th=[ 550], 99.90th=[ 592], 99.95th=[ 600], 00:25:35.950 | 99.99th=[ 617] 00:25:35.950 bw ( KiB/s): min=36864, max=195072, per=7.52%, avg=79722.10, stdev=45706.66, samples=20 00:25:35.950 iops : min= 144, max= 762, avg=311.40, stdev=178.56, samples=20 00:25:35.950 lat (msec) : 2=0.50%, 4=3.31%, 10=3.34%, 20=3.46%, 50=6.83% 00:25:35.950 lat (msec) : 100=14.04%, 250=31.76%, 500=35.06%, 750=1.70% 00:25:35.950 cpu : usr=0.62%, sys=1.09%, ctx=1797, majf=0, minf=1 00:25:35.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:25:35.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:35.950 issued rwts: total=0,3177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:35.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:35.950 job9: (groupid=0, jobs=1): err= 0: pid=2127951: Sun Oct 6 11:20:32 2024 00:25:35.950 write: IOPS=202, BW=50.7MiB/s (53.1MB/s)(516MiB/10180msec); 0 zone resets 00:25:35.950 slat (usec): min=25, max=258248, avg=4375.48, stdev=13227.50 00:25:35.950 clat (msec): min=9, max=591, avg=311.08, stdev=104.58 00:25:35.950 lat (msec): min=9, max=591, avg=315.46, stdev=106.25 00:25:35.950 clat percentiles (msec): 00:25:35.950 | 1.00th=[ 39], 5.00th=[ 127], 10.00th=[ 161], 20.00th=[ 207], 00:25:35.950 | 30.00th=[ 275], 40.00th=[ 317], 50.00th=[ 338], 60.00th=[ 351], 00:25:35.950 | 70.00th=[ 372], 80.00th=[ 388], 90.00th=[ 422], 95.00th=[ 464], 00:25:35.950 | 99.00th=[ 514], 99.50th=[ 531], 99.90th=[ 592], 99.95th=[ 592], 00:25:35.950 | 99.99th=[ 592] 00:25:35.950 bw ( KiB/s): min=32768, max=89600, per=4.83%, avg=51232.05, stdev=15245.54, samples=20 00:25:35.950 iops : min= 128, max= 350, avg=200.10, stdev=59.53, samples=20 00:25:35.950 lat (msec) : 10=0.05%, 20=0.53%, 50=1.02%, 100=1.50%, 250=23.93% 00:25:35.950 lat (msec) : 500=70.93%, 750=2.03% 00:25:35.950 cpu : usr=0.54%, sys=0.77%, ctx=690, majf=0, minf=1 00:25:35.950 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:25:35.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:35.950 issued rwts: total=0,2064,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:35.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:35.950 job10: (groupid=0, jobs=1): err= 0: pid=2127952: Sun Oct 6 11:20:32 2024 00:25:35.950 write: IOPS=587, BW=147MiB/s (154MB/s)(1481MiB/10086msec); 0 zone resets 00:25:35.950 slat (usec): min=28, max=44593, avg=1355.38, stdev=3823.83 00:25:35.950 clat (usec): min=1939, max=476690, avg=107581.36, stdev=88818.73 00:25:35.950 lat (usec): min=1979, max=481797, avg=108936.75, 
stdev=89946.58 00:25:35.950 clat percentiles (msec): 00:25:35.950 | 1.00th=[ 13], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 40], 00:25:35.950 | 30.00th=[ 42], 40.00th=[ 45], 50.00th=[ 72], 60.00th=[ 86], 00:25:35.950 | 70.00th=[ 144], 80.00th=[ 186], 90.00th=[ 232], 95.00th=[ 268], 00:25:35.951 | 99.00th=[ 418], 99.50th=[ 456], 99.90th=[ 477], 99.95th=[ 477], 00:25:35.951 | 99.99th=[ 477] 00:25:35.951 bw ( KiB/s): min=38912, max=423936, per=14.14%, avg=149995.30, stdev=114799.32, samples=20 00:25:35.951 iops : min= 152, max= 1656, avg=585.90, stdev=448.45, samples=20 00:25:35.951 lat (msec) : 2=0.02%, 4=0.05%, 10=0.64%, 20=0.96%, 50=42.11% 00:25:35.951 lat (msec) : 100=18.15%, 250=31.41%, 500=6.65% 00:25:35.951 cpu : usr=1.24%, sys=1.73%, ctx=2393, majf=0, minf=1 00:25:35.951 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:35.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:35.951 issued rwts: total=0,5922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:35.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:35.951 00:25:35.951 Run status group 0 (all jobs): 00:25:35.951 WRITE: bw=1036MiB/s (1086MB/s), 50.7MiB/s-153MiB/s (53.1MB/s-160MB/s), io=10.3GiB (11.1GB), run=10052-10183msec 00:25:35.951 00:25:35.951 Disk stats (read/write): 00:25:35.951 nvme0n1: ios=49/6940, merge=0/0, ticks=39/1248721, in_queue=1248760, util=97.42% 00:25:35.951 nvme10n1: ios=46/7743, merge=0/0, ticks=228/1219241, in_queue=1219469, util=98.03% 00:25:35.951 nvme1n1: ios=45/9827, merge=0/0, ticks=2520/1195609, in_queue=1198129, util=100.00% 00:25:35.951 nvme2n1: ios=0/6070, merge=0/0, ticks=0/1225743, in_queue=1225743, util=97.72% 00:25:35.951 nvme3n1: ios=48/6052, merge=0/0, ticks=1030/1247461, in_queue=1248491, util=100.00% 00:25:35.951 nvme4n1: ios=0/5568, merge=0/0, ticks=0/1247003, in_queue=1247003, util=98.17% 00:25:35.951 nvme5n1: ios=44/6772, merge=0/0, ticks=1988/1242924, in_queue=1244912, util=100.00% 00:25:35.951 nvme6n1: ios=42/12189, merge=0/0, ticks=988/1209819, in_queue=1210807, util=100.00% 00:25:35.951 nvme7n1: ios=42/6348, merge=0/0, ticks=839/1246375, in_queue=1247214, util=100.00% 00:25:35.951 nvme8n1: ios=45/4122, merge=0/0, ticks=2254/1177178, in_queue=1179432, util=100.00% 00:25:35.951 nvme9n1: ios=38/11626, merge=0/0, ticks=584/1209639, in_queue=1210223, util=100.00% 00:25:35.951 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:35.951 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:35.951 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.951 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:35.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:35.951 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:35.951 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:35.951 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:35.951 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q 
-w SPDK1 00:25:35.951 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:35.951 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:25:35.951 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:35.951 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:35.951 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.951 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.951 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.951 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.951 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:36.208 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:36.208 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:36.208 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:36.208 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:36.208 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:25:36.209 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:25:36.209 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:36.209 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:36.209 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:36.209 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.209 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:36.209 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.209 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:36.209 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:36.466 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:36.466 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:36.466 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:36.466 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:36.466 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w 
SPDK3 00:25:36.466 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:25:36.466 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:36.466 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:36.466 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:36.466 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.466 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:36.466 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.466 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:36.467 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:37.032 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:37.032 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w 
SPDK5 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:37.032 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:25:37.290 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:37.290 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:37.290 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.290 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.290 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.290 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.290 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:37.290 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:37.290 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:37.290 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:37.290 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:37.290 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:25:37.290 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:25:37.290 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:37.290 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:37.290 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:37.290 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.290 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.547 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.547 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.547 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:37.547 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:37.547 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:37.547 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:37.547 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:37.547 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w 
SPDK7 00:25:37.547 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:37.547 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:25:37.547 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:37.547 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:37.547 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.547 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.547 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.547 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.547 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:37.805 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:37.805 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:37.805 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:37.805 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:37.805 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:25:37.805 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:37.805 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:25:37.805 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:37.805 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:37.805 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.805 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.805 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.805 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.805 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:38.062 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:38.062 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:38.062 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:38.062 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:25:38.062 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o 
NAME,SERIAL 00:25:38.062 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:25:38.062 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:38.062 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:38.062 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:38.062 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.062 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.062 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.062 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:38.062 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:38.062 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:38.062 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:38.062 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:38.063 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:25:38.063 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:38.063 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:38.320 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- 
# grep -q -w SPDK11 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:38.320 rmmod nvme_tcp 00:25:38.320 rmmod nvme_fabrics 00:25:38.320 rmmod nvme_keyring 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@515 -- # '[' -n 2120209 ']' 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # killprocess 2120209 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 2120209 ']' 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 2120209 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:38.320 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2120209 00:25:38.578 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:38.578 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:38.578 
11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2120209' 00:25:38.578 killing process with pid 2120209 00:25:38.578 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 2120209 00:25:38.578 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 2120209 00:25:38.836 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:38.836 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:38.836 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:38.836 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:25:38.836 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-save 00:25:38.836 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-restore 00:25:38.836 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:38.836 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:38.836 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:38.836 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.836 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:38.836 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:41.368 00:25:41.368 real 1m10.792s 00:25:41.368 user 4m17.293s 00:25:41.368 sys 0m17.171s 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:41.368 ************************************ 00:25:41.368 END TEST nvmf_multiconnection 00:25:41.368 ************************************ 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:41.368 ************************************ 00:25:41.368 START TEST nvmf_initiator_timeout 00:25:41.368 ************************************ 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:41.368 * Looking for test storage... 
00:25:41.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:41.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.368 --rc genhtml_branch_coverage=1 00:25:41.368 --rc genhtml_function_coverage=1 00:25:41.368 --rc genhtml_legend=1 00:25:41.368 --rc geninfo_all_blocks=1 00:25:41.368 --rc geninfo_unexecuted_blocks=1 00:25:41.368 00:25:41.368 ' 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:41.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.368 --rc genhtml_branch_coverage=1 00:25:41.368 --rc genhtml_function_coverage=1 00:25:41.368 --rc genhtml_legend=1 00:25:41.368 --rc geninfo_all_blocks=1 00:25:41.368 --rc geninfo_unexecuted_blocks=1 00:25:41.368 00:25:41.368 ' 00:25:41.368 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:41.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.368 --rc genhtml_branch_coverage=1 00:25:41.368 --rc genhtml_function_coverage=1 00:25:41.368 --rc genhtml_legend=1 00:25:41.368 --rc geninfo_all_blocks=1 00:25:41.369 --rc geninfo_unexecuted_blocks=1 00:25:41.369 00:25:41.369 ' 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:41.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.369 --rc genhtml_branch_coverage=1 00:25:41.369 --rc genhtml_function_coverage=1 00:25:41.369 --rc genhtml_legend=1 00:25:41.369 --rc geninfo_all_blocks=1 00:25:41.369 --rc geninfo_unexecuted_blocks=1 00:25:41.369 00:25:41.369 ' 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:41.369 11:20:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:41.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:25:41.369 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:46.634 11:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:46.634 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:46.634 11:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:46.634 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:46.634 Found net devices under 0000:af:00.0: cvl_0_0 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.634 11:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:46.634 Found net devices under 0000:af:00.1: cvl_0_1 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # is_hw=yes 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:46.634 11:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:46.634 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:46.635 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:46.635 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:46.635 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:46.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:46.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:25:46.635 00:25:46.635 --- 10.0.0.2 ping statistics --- 00:25:46.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.635 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:25:46.635 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:46.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:46.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:25:46.635 00:25:46.635 --- 10.0.0.1 ping statistics --- 00:25:46.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.635 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:25:46.635 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.635 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # return 0 00:25:46.635 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:46.635 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.635 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:46.635 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:46.635 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.635 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:46.635 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:46.635 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:46.635 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:46.635 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:46.635 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.635 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # nvmfpid=2133043 00:25:46.635 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # waitforlisten 
2133043 00:25:46.635 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 2133043 ']' 00:25:46.635 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.635 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:46.635 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:46.635 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.635 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:46.635 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.635 [2024-10-06 11:20:44.075390] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:25:46.635 [2024-10-06 11:20:44.075434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.635 [2024-10-06 11:20:44.134673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:46.635 [2024-10-06 11:20:44.174087] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.635 [2024-10-06 11:20:44.174128] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.635 [2024-10-06 11:20:44.174135] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.635 [2024-10-06 11:20:44.174141] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.635 [2024-10-06 11:20:44.174146] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:46.635 [2024-10-06 11:20:44.175672] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.635 [2024-10-06 11:20:44.175768] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.635 [2024-10-06 11:20:44.175859] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:25:46.635 [2024-10-06 11:20:44.175860] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.893 Malloc0 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.893 Delay0 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.893 [2024-10-06 11:20:44.343907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.893 11:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.893 [2024-10-06 11:20:44.369286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.893 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:48.262 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:48.262 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:25:48.262 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:48.262 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:48.262 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:25:50.179 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:50.179 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:50.179 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:50.179 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:50.179 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:50.179 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:25:50.179 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2133533 00:25:50.179 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
1 -t write -r 60 -v 00:25:50.179 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:50.179 [global] 00:25:50.179 thread=1 00:25:50.179 invalidate=1 00:25:50.179 rw=write 00:25:50.179 time_based=1 00:25:50.179 runtime=60 00:25:50.179 ioengine=libaio 00:25:50.179 direct=1 00:25:50.179 bs=4096 00:25:50.179 iodepth=1 00:25:50.179 norandommap=0 00:25:50.179 numjobs=1 00:25:50.179 00:25:50.179 verify_dump=1 00:25:50.179 verify_backlog=512 00:25:50.179 verify_state_save=0 00:25:50.179 do_verify=1 00:25:50.179 verify=crc32c-intel 00:25:50.179 [job0] 00:25:50.179 filename=/dev/nvme0n1 00:25:50.179 Could not set queue depth (nvme0n1) 00:25:50.444 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:50.444 fio-3.35 00:25:50.444 Starting 1 thread 00:25:52.964 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:52.964 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.964 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:52.964 true 00:25:52.964 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.964 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:52.964 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.964 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:52.964 true 00:25:52.964 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.964 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:52.964 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.964 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:52.964 true 00:25:52.964 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.964 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:52.964 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.964 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:52.964 true 00:25:53.221 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.221 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:56.498 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:56.498 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.498 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@10 -- # set +x 00:25:56.498 true 00:25:56.498 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.498 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:56.498 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.498 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:56.498 true 00:25:56.498 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.498 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:56.498 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.498 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:56.498 true 00:25:56.498 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.498 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:56.498 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.498 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:56.498 true 00:25:56.498 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.498 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:56.498 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2133533 00:26:52.689 00:26:52.689 job0: (groupid=0, jobs=1): err= 0: pid=2133764: Sun Oct 6 11:21:47 2024 00:26:52.689 read: IOPS=14, BW=59.8KiB/s (61.2kB/s)(3588KiB/60024msec) 00:26:52.689 slat (usec): min=7, max=16014, avg=40.51, stdev=577.56 00:26:52.689 clat (usec): min=289, max=41540k, avg=66566.53, stdev=1386437.69 00:26:52.689 lat (usec): min=297, max=41540k, avg=66607.04, stdev=1386436.93 00:26:52.689 clat percentiles (usec): 00:26:52.689 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 318], 00:26:52.689 | 20.00th=[ 326], 30.00th=[ 334], 40.00th=[ 347], 00:26:52.689 | 50.00th=[ 408], 60.00th=[ 41157], 70.00th=[ 41157], 00:26:52.689 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:26:52.689 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[17112761], 00:26:52.689 | 99.95th=[17112761], 99.99th=[17112761] 00:26:52.689 write: IOPS=17, BW=68.2KiB/s (69.9kB/s)(4096KiB/60024msec); 0 zone resets 00:26:52.689 slat (usec): min=9, max=27242, avg=37.91, stdev=850.98 00:26:52.689 clat (usec): min=195, max=436, avg=225.06, stdev=15.48 00:26:52.689 lat (usec): min=206, max=27634, avg=262.98, stdev=856.34 00:26:52.689 clat percentiles (usec): 00:26:52.689 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:26:52.689 | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 00:26:52.689 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 241], 95.00th=[ 245], 00:26:52.689 | 99.00th=[ 262], 99.50th=[ 277], 99.90th=[ 392], 99.95th=[ 437], 00:26:52.689 | 
99.99th=[ 437] 00:26:52.689 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:26:52.689 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:26:52.689 lat (usec) : 250=51.69%, 500=25.09%, 750=0.26%, 1000=0.05% 00:26:52.689 lat (msec) : 50=22.85%, >=2000=0.05% 00:26:52.689 cpu : usr=0.03%, sys=0.06%, ctx=1929, majf=0, minf=1 00:26:52.689 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.689 issued rwts: total=897,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.689 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:52.689 00:26:52.689 Run status group 0 (all jobs): 00:26:52.689 READ: bw=59.8KiB/s (61.2kB/s), 59.8KiB/s-59.8KiB/s (61.2kB/s-61.2kB/s), io=3588KiB (3674kB), run=60024-60024msec 00:26:52.689 WRITE: bw=68.2KiB/s (69.9kB/s), 68.2KiB/s-68.2KiB/s (69.9kB/s-69.9kB/s), io=4096KiB (4194kB), run=60024-60024msec 00:26:52.689 00:26:52.690 Disk stats (read/write): 00:26:52.690 nvme0n1: ios=944/1024, merge=0/0, ticks=19319/218, in_queue=19537, util=100.00% 00:26:52.690 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:52.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:52.690 nvmf hotplug test: fio successful as expected 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- 
# trap - SIGINT SIGTERM EXIT 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:52.690 rmmod nvme_tcp 00:26:52.690 rmmod nvme_fabrics 00:26:52.690 rmmod nvme_keyring 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@515 -- # '[' -n 2133043 ']' 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # killprocess 2133043 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 2133043 ']' 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 2133043 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2133043 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2133043' 00:26:52.690 killing process with pid 2133043 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 2133043 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 2133043 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # iptables-save 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:52.690 
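For orientation, the initiator_timeout run traced above boils down to the following command sequence (a hand-written condensation of what target/initiator_timeout.sh drives; "rpc.py" stands in for the rpc_cmd helper seen in the trace, and the fio wrapper path is shortened):

  # attach the Delay0 bdev to the subsystem and listen on NVMe/TCP 10.0.0.2:4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      --hostid=80b56b8f-cbc7-e911-906e-0017a4403562

  # kick off the 60 s verified write job, then push the delay-bdev latencies
  # up to ~31 s so the initiator's I/O timeout path gets exercised mid-run
  scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v & fio_pid=$!
  rpc.py bdev_delay_update_latency Delay0 avg_read  31000000
  rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
  rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
  rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3

  # restore 30 us latencies and expect fio to complete with status 0
  rpc.py bdev_delay_update_latency Delay0 avg_read  30
  rpc.py bdev_delay_update_latency Delay0 avg_write 30
  rpc.py bdev_delay_update_latency Delay0 p99_read  30
  rpc.py bdev_delay_update_latency Delay0 p99_write 30
  wait "$fio_pid"    # exit 0 -> "nvmf hotplug test: fio successful as expected"

The p99_write value of 310000000 is copied verbatim from the trace; it sits one order of magnitude above the other three knobs.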
11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # iptables-restore 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.690 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.949 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:52.949 00:26:52.949 real 1m12.007s 00:26:52.949 user 4m22.068s 00:26:52.949 sys 0m5.965s 00:26:52.949 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:52.949 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.949 ************************************ 00:26:52.949 END TEST nvmf_initiator_timeout 00:26:52.949 ************************************ 00:26:53.205 11:21:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:26:53.205 11:21:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:26:53.205 11:21:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:26:53.205 11:21:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:26:53.205 11:21:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:58.495 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.495 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:26:58.495 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:58.495 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:58.495 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:58.495 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:58.495 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:58.495 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:26:58.495 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:58.495 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:26:58.495 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:26:58.495 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:26:58.495 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:26:58.495 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:26:58.495 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.496 11:21:55 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:58.496 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:58.496 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:58.496 Found net devices under 0000:af:00.0: cvl_0_0 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:58.496 Found net devices under 0000:af:00.1: cvl_0_1 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:58.496 ************************************ 00:26:58.496 START TEST nvmf_perf_adq 00:26:58.496 ************************************ 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:58.496 * Looking for test storage... 
00:26:58.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:58.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.496 --rc genhtml_branch_coverage=1 00:26:58.496 --rc genhtml_function_coverage=1 00:26:58.496 --rc genhtml_legend=1 00:26:58.496 --rc geninfo_all_blocks=1 00:26:58.496 --rc geninfo_unexecuted_blocks=1 00:26:58.496 00:26:58.496 ' 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:58.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.496 --rc genhtml_branch_coverage=1 00:26:58.496 --rc genhtml_function_coverage=1 00:26:58.496 --rc genhtml_legend=1 00:26:58.496 --rc geninfo_all_blocks=1 00:26:58.496 --rc geninfo_unexecuted_blocks=1 00:26:58.496 00:26:58.496 ' 00:26:58.496 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:58.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.497 --rc genhtml_branch_coverage=1 00:26:58.497 --rc genhtml_function_coverage=1 00:26:58.497 --rc genhtml_legend=1 00:26:58.497 --rc geninfo_all_blocks=1 00:26:58.497 --rc geninfo_unexecuted_blocks=1 00:26:58.497 00:26:58.497 ' 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:58.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.497 --rc genhtml_branch_coverage=1 00:26:58.497 --rc genhtml_function_coverage=1 00:26:58.497 --rc genhtml_legend=1 00:26:58.497 --rc geninfo_all_blocks=1 00:26:58.497 --rc geninfo_unexecuted_blocks=1 00:26:58.497 00:26:58.497 ' 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
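The lcov version gate traced just above (lt 1.15 2 via cmp_versions) is a plain field-by-field compare on dot/dash-separated components. A compact re-implementation of that idea, shown only to make the trace easier to follow (it is not the literal scripts/common.sh code):

  # succeed (return 0) when version $1 sorts strictly before version $2
  lt() {
      local -a ver1 ver2
      local v len
      IFS=.- read -ra ver1 <<< "$1"
      IFS=.- read -ra ver2 <<< "$2"
      len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          # absent fields count as 0, so "2" behaves like "2.0"
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not "less than"
  }

  lt 1.15 2 && echo "lcov predates 2.x"   # the branch the trace takes before exporting LCOV_OPTS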
00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:58.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:58.497 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:58.497 11:21:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:02.678 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:02.678 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:02.679 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:02.679 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:02.679 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:02.679 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:02.679 11:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:02.679 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:02.679 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:02.679 Found net devices under 0000:af:00.0: cvl_0_0 00:27:02.679 11:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:02.679 Found net devices under 0000:af:00.1: cvl_0_1 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:02.679 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:03.613 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:05.513 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.781 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:10.782 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:10.782 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:10.782 Found net devices under 0000:af:00.0: cvl_0_0 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:10.782 Found net devices under 0000:af:00.1: cvl_0_1 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:10.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:27:10.782 00:27:10.782 --- 10.0.0.2 ping statistics --- 00:27:10.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.782 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:10.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:27:10.782 00:27:10.782 --- 10.0.0.1 ping statistics --- 00:27:10.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.782 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=2151436 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 2151436 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2151436 ']' 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:10.782 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.783 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:10.783 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:10.783 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.039 [2024-10-06 11:22:08.372930] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:27:11.039 [2024-10-06 11:22:08.372973] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:11.039 [2024-10-06 11:22:08.435437] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:11.039 [2024-10-06 11:22:08.475264] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:11.039 [2024-10-06 11:22:08.475303] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:11.039 [2024-10-06 11:22:08.475311] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:11.040 [2024-10-06 11:22:08.475317] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:11.040 [2024-10-06 11:22:08.475322] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:11.040 [2024-10-06 11:22:08.476796] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.040 [2024-10-06 11:22:08.476811] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:11.040 [2024-10-06 11:22:08.476898] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:11.040 [2024-10-06 11:22:08.476900] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.040 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:11.040 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:27:11.040 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:11.040 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:11.040 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.040 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:11.040 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:11.040 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:11.040 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:11.040 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.040 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.040 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.040 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:11.040 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:11.040 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.040 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.297 
11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.297 [2024-10-06 11:22:08.719625] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.297 Malloc1 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.297 [2024-10-06 11:22:08.771069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2151676 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:11.297 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:13.824 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:13.824 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.824 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.825 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.825 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:13.825 "tick_rate": 2100000000, 00:27:13.825 "poll_groups": [ 00:27:13.825 { 00:27:13.825 "name": "nvmf_tgt_poll_group_000", 00:27:13.825 "admin_qpairs": 1, 00:27:13.825 "io_qpairs": 1, 00:27:13.825 "current_admin_qpairs": 1, 00:27:13.825 "current_io_qpairs": 1, 00:27:13.825 "pending_bdev_io": 0, 00:27:13.825 "completed_nvme_io": 19114, 00:27:13.825 "transports": [ 00:27:13.825 { 00:27:13.825 "trtype": "TCP" 00:27:13.825 } 00:27:13.825 ] 00:27:13.825 }, 00:27:13.825 { 00:27:13.825 "name": "nvmf_tgt_poll_group_001", 00:27:13.825 "admin_qpairs": 0, 00:27:13.825 "io_qpairs": 1, 00:27:13.825 "current_admin_qpairs": 0, 00:27:13.825 "current_io_qpairs": 1, 00:27:13.825 "pending_bdev_io": 0, 00:27:13.825 "completed_nvme_io": 19484, 00:27:13.825 "transports": [ 00:27:13.825 { 00:27:13.825 "trtype": "TCP" 00:27:13.825 } 00:27:13.825 ] 00:27:13.825 }, 00:27:13.825 { 00:27:13.825 "name": "nvmf_tgt_poll_group_002", 00:27:13.825 "admin_qpairs": 0, 00:27:13.825 "io_qpairs": 1, 00:27:13.825 "current_admin_qpairs": 0, 00:27:13.825 "current_io_qpairs": 1, 00:27:13.825 "pending_bdev_io": 0, 00:27:13.825 "completed_nvme_io": 19619, 00:27:13.825 "transports": [ 00:27:13.825 { 00:27:13.825 "trtype": "TCP" 00:27:13.825 } 00:27:13.825 ] 00:27:13.825 }, 00:27:13.825 { 00:27:13.825 "name": "nvmf_tgt_poll_group_003", 00:27:13.825 "admin_qpairs": 0, 00:27:13.825 "io_qpairs": 1, 00:27:13.825 "current_admin_qpairs": 0, 00:27:13.825 "current_io_qpairs": 1, 00:27:13.825 "pending_bdev_io": 0, 00:27:13.825 "completed_nvme_io": 18795, 00:27:13.825 "transports": [ 00:27:13.825 { 00:27:13.825 "trtype": "TCP" 00:27:13.825 } 00:27:13.825 ] 00:27:13.825 } 00:27:13.825 ] 00:27:13.825 }' 00:27:13.825 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:13.825 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:13.825 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:13.825 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:13.825 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2151676 00:27:21.932 Initializing NVMe Controllers 00:27:21.932 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:21.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:21.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:21.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:21.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:27:21.933 Initialization complete. Launching workers. 00:27:21.933 ======================================================== 00:27:21.933 Latency(us) 00:27:21.933 Device Information : IOPS MiB/s Average min max 00:27:21.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10027.50 39.17 6384.36 2078.00 11298.29 00:27:21.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10224.80 39.94 6258.96 1655.85 13112.86 00:27:21.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10366.90 40.50 6173.39 2158.10 10818.44 00:27:21.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10172.10 39.73 6293.60 2325.44 10920.82 00:27:21.933 ======================================================== 00:27:21.933 Total : 40791.29 159.34 6276.68 1655.85 13112.86 00:27:21.933 00:27:21.933 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:27:21.933 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:21.933 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:27:21.933 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:21.933 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:27:21.933 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:21.933 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:21.933 rmmod nvme_tcp 00:27:21.933 rmmod nvme_fabrics 00:27:21.933 rmmod nvme_keyring 00:27:21.933 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:21.933 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:27:21.933 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:27:21.933 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 2151436 ']' 00:27:21.933 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 2151436 00:27:21.933 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2151436 ']' 00:27:21.933 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2151436 00:27:21.933 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:27:21.933 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:21.933 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2151436 00:27:21.933 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:21.933 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:21.933 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2151436' 00:27:21.933 killing process with pid 2151436 00:27:21.933 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2151436 00:27:21.933 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2151436 00:27:21.933 11:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:21.933 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:21.933 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:21.933 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:21.933 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:27:21.933 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:21.933 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:27:21.933 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:21.933 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:21.933 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.933 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.933 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.835 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:23.835 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:27:23.835 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:23.835 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:25.208 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:27.109 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:32.380 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:32.380 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:32.380 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:32.381 Found net devices under 0000:af:00.0: cvl_0_0 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:32.381 Found net devices under 0000:af:00.1: cvl_0_1 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:32.381 11:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:32.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:32.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:27:32.381 00:27:32.381 --- 10.0.0.2 ping statistics --- 00:27:32.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.381 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:32.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:32.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:27:32.381 00:27:32.381 --- 10.0.0.1 ping statistics --- 00:27:32.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.381 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:32.381 net.core.busy_poll = 1 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:32.381 net.core.busy_read = 1 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=2155405 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 2155405 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2155405 ']' 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:32.381 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.381 [2024-10-06 11:22:29.856758] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:27:32.381 [2024-10-06 11:22:29.856802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.381 [2024-10-06 11:22:29.916816] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:32.739 [2024-10-06 11:22:29.956048] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
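The ADQ host-side configuration captured in the adq_configure_driver step above reduces to the commands below, run against the target port inside its namespace (a condensed sketch; the device name, queue layout, filter match and the set_xps_rxqs helper path are taken from the log, while the inline comments are editorial):

    ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }   # run a command inside the target namespace
    # enable hardware TC offload, the ADQ prerequisite on the ice/E810 port
    ns ethtool --offload cvl_0_0 hw-tc-offload on
    ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    # let application threads busy-poll their sockets instead of waiting for interrupts
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 (default) on queues 0-1, TC1 (the ADQ channel) on queues 2-3
    ns /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ns /usr/sbin/tc qdisc add dev cvl_0_0 ingress
    # steer inbound NVMe/TCP traffic for 10.0.0.2:4420 into TC1, in hardware only (skip_sw)
    ns /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # SPDK helper that aligns transmit (XPS) queues with the receive queues of the channel
    ns /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0

On the SPDK side this is paired, in the RPC calls that follow, with sock_impl_set_options --enable-placement-id 1 on the posix implementation and a TCP transport created with --sock-priority 1, the intent being that connections arriving on the same hardware queue are serviced by the same target poll group.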
00:27:32.739 [2024-10-06 11:22:29.956091] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.739 [2024-10-06 11:22:29.956102] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.739 [2024-10-06 11:22:29.956108] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.739 [2024-10-06 11:22:29.956113] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:32.739 [2024-10-06 11:22:29.957526] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.739 [2024-10-06 11:22:29.957562] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:32.739 [2024-10-06 11:22:29.957634] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:32.739 [2024-10-06 11:22:29.957635] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.739 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:32.739 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:27:32.739 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:32.739 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:32.739 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.739 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:32.739 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:27:32.739 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:32.739 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:32.739 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.739 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.739 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.739 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:32.739 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:32.739 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.740 11:22:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.740 [2024-10-06 11:22:30.179682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.740 Malloc1 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.740 [2024-10-06 11:22:30.225866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2155501 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:27:32.740 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:34.692 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:27:34.692 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.692 11:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.692 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.692 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:27:34.692 "tick_rate": 2100000000, 00:27:34.692 "poll_groups": [ 00:27:34.692 { 00:27:34.692 "name": "nvmf_tgt_poll_group_000", 00:27:34.692 "admin_qpairs": 1, 00:27:34.692 "io_qpairs": 1, 00:27:34.692 "current_admin_qpairs": 1, 00:27:34.692 "current_io_qpairs": 1, 00:27:34.692 "pending_bdev_io": 0, 00:27:34.692 "completed_nvme_io": 26974, 00:27:34.692 "transports": [ 00:27:34.692 { 00:27:34.692 "trtype": "TCP" 00:27:34.692 } 00:27:34.692 ] 00:27:34.692 }, 00:27:34.692 { 00:27:34.692 "name": "nvmf_tgt_poll_group_001", 00:27:34.692 "admin_qpairs": 0, 00:27:34.692 "io_qpairs": 3, 00:27:34.692 "current_admin_qpairs": 0, 00:27:34.692 "current_io_qpairs": 3, 00:27:34.692 "pending_bdev_io": 0, 00:27:34.692 "completed_nvme_io": 29481, 00:27:34.692 "transports": [ 00:27:34.692 { 00:27:34.692 "trtype": "TCP" 00:27:34.692 } 00:27:34.692 ] 00:27:34.692 }, 00:27:34.692 { 00:27:34.692 "name": "nvmf_tgt_poll_group_002", 00:27:34.692 "admin_qpairs": 0, 00:27:34.692 "io_qpairs": 0, 00:27:34.692 "current_admin_qpairs": 0, 00:27:34.692 "current_io_qpairs": 0, 00:27:34.692 "pending_bdev_io": 0, 00:27:34.692 "completed_nvme_io": 0, 00:27:34.692 "transports": [ 00:27:34.692 { 00:27:34.692 "trtype": "TCP" 00:27:34.692 } 00:27:34.692 ] 00:27:34.692 }, 00:27:34.692 { 00:27:34.692 "name": "nvmf_tgt_poll_group_003", 00:27:34.692 "admin_qpairs": 0, 00:27:34.692 "io_qpairs": 0, 00:27:34.692 "current_admin_qpairs": 0, 00:27:34.692 "current_io_qpairs": 0, 00:27:34.692 "pending_bdev_io": 0, 00:27:34.692 "completed_nvme_io": 0, 00:27:34.692 "transports": [ 00:27:34.692 { 00:27:34.692 "trtype": "TCP" 00:27:34.692 } 00:27:34.692 ] 00:27:34.692 } 00:27:34.692 ] 00:27:34.692 }' 00:27:34.692 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:34.692 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:27:34.949 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:27:34.949 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:27:34.949 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2155501 00:27:43.048 Initializing NVMe Controllers 00:27:43.048 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:43.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:43.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:43.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:43.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:43.048 Initialization complete. Launching workers. 
00:27:43.048 ======================================================== 00:27:43.048 Latency(us) 00:27:43.048 Device Information : IOPS MiB/s Average min max 00:27:43.048 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 14662.59 57.28 4364.24 1225.63 6516.06 00:27:43.048 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5229.26 20.43 12238.22 1585.70 60038.16 00:27:43.048 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5113.26 19.97 12552.99 1547.92 58056.90 00:27:43.048 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5561.16 21.72 11552.26 1388.86 57721.17 00:27:43.048 ======================================================== 00:27:43.048 Total : 30566.28 119.40 8388.94 1225.63 60038.16 00:27:43.048 00:27:43.048 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:27:43.048 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:43.048 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:27:43.048 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:43.048 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:27:43.048 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:43.048 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:43.048 rmmod nvme_tcp 00:27:43.048 rmmod nvme_fabrics 00:27:43.048 rmmod nvme_keyring 00:27:43.048 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:43.048 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:27:43.048 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:27:43.048 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 2155405 ']' 00:27:43.049 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 2155405 00:27:43.049 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2155405 ']' 00:27:43.049 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2155405 00:27:43.049 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:27:43.049 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:43.049 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2155405 00:27:43.049 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:43.049 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:43.049 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2155405' 00:27:43.049 killing process with pid 2155405 00:27:43.049 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2155405 00:27:43.049 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2155405 00:27:43.307 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:43.307 
11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:43.307 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:43.307 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:43.307 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:27:43.307 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:27:43.307 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:43.307 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:43.307 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:43.307 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.307 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:43.307 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.592 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:46.592 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:27:46.592 00:27:46.592 real 0m48.607s 00:27:46.592 user 2m43.580s 00:27:46.592 sys 0m9.266s 00:27:46.592 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:46.592 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:46.592 ************************************ 00:27:46.592 END TEST nvmf_perf_adq 00:27:46.592 ************************************ 00:27:46.592 11:22:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:46.592 11:22:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:46.592 11:22:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:46.592 11:22:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:46.592 ************************************ 00:27:46.592 START TEST nvmf_shutdown 00:27:46.592 ************************************ 00:27:46.592 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:46.592 * Looking for test storage... 
00:27:46.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:46.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.592 --rc genhtml_branch_coverage=1 00:27:46.592 --rc genhtml_function_coverage=1 00:27:46.592 --rc genhtml_legend=1 00:27:46.592 --rc geninfo_all_blocks=1 00:27:46.592 --rc geninfo_unexecuted_blocks=1 00:27:46.592 00:27:46.592 ' 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:46.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.592 --rc genhtml_branch_coverage=1 00:27:46.592 --rc genhtml_function_coverage=1 00:27:46.592 --rc genhtml_legend=1 00:27:46.592 --rc geninfo_all_blocks=1 00:27:46.592 --rc geninfo_unexecuted_blocks=1 00:27:46.592 00:27:46.592 ' 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:46.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.592 --rc genhtml_branch_coverage=1 00:27:46.592 --rc genhtml_function_coverage=1 00:27:46.592 --rc genhtml_legend=1 00:27:46.592 --rc geninfo_all_blocks=1 00:27:46.592 --rc geninfo_unexecuted_blocks=1 00:27:46.592 00:27:46.592 ' 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:46.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.592 --rc genhtml_branch_coverage=1 00:27:46.592 --rc genhtml_function_coverage=1 00:27:46.592 --rc genhtml_legend=1 00:27:46.592 --rc geninfo_all_blocks=1 00:27:46.592 --rc geninfo_unexecuted_blocks=1 00:27:46.592 00:27:46.592 ' 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
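The lcov gate traced above (scripts/common.sh lt -> cmp_versions) splits both version strings on '.', '-' and ':' and compares the pieces numerically from left to right. A minimal stand-alone sketch of that idea; the function name and details here are illustrative, not SPDK's exact code.

version_lt() {    # succeeds (returns 0) when $1 sorts before $2
    local IFS='.-:' i v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}
version_lt 1.15 2 && echo 'lcov 1.15 is older than 2.x'   # mirrors the branch taken in the trace above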
00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:46.592 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:46.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:46.593 11:22:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:46.593 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:46.851 ************************************ 00:27:46.851 START TEST nvmf_shutdown_tc1 00:27:46.851 ************************************ 00:27:46.851 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:27:46.851 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:27:46.851 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:46.851 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:46.851 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:46.851 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:46.851 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:46.851 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:46.851 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.851 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:46.851 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.851 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:46.851 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:46.851 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:46.851 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:52.113 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:52.114 11:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:52.114 11:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:52.114 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:52.114 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:52.114 Found net devices under 0000:af:00.0: cvl_0_0 00:27:52.114 11:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:52.114 Found net devices under 0000:af:00.1: cvl_0_1 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.114 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:52.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:27:52.115 00:27:52.115 --- 10.0.0.2 ping statistics --- 00:27:52.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.115 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:52.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:27:52.115 00:27:52.115 --- 10.0.0.1 ping statistics --- 00:27:52.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.115 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=2160811 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 2160811 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2160811 ']' 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
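The nvmf_tcp_init sequence traced above wires the two ice ports into a point-to-point test link: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the host namespace as 10.0.0.1 (initiator side), an iptables rule admits the NVMe/TCP listen port, and a ping in each direction confirms the path. Condensed from the trace as a hand-written summary, not the verbatim common.sh code:

ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # host -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host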
00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:52.115 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:52.115 [2024-10-06 11:22:49.679232] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:27:52.115 [2024-10-06 11:22:49.679275] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.373 [2024-10-06 11:22:49.732026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:52.373 [2024-10-06 11:22:49.771945] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.373 [2024-10-06 11:22:49.771982] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.373 [2024-10-06 11:22:49.771989] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.373 [2024-10-06 11:22:49.771995] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.373 [2024-10-06 11:22:49.772000] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:52.373 [2024-10-06 11:22:49.773550] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.373 [2024-10-06 11:22:49.773638] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.373 [2024-10-06 11:22:49.773746] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.373 [2024-10-06 11:22:49.773746] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:52.373 [2024-10-06 11:22:49.914980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:52.373 11:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.373 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:52.631 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.631 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:52.631 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.631 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:52.631 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.631 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:52.631 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.631 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:52.631 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.631 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:52.631 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.631 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:52.631 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:52.631 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.631 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:52.631 Malloc1 
00:27:52.631 [2024-10-06 11:22:50.014625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.631 Malloc2 00:27:52.631 Malloc3 00:27:52.631 Malloc4 00:27:52.631 Malloc5 00:27:52.631 Malloc6 00:27:52.889 Malloc7 00:27:52.889 Malloc8 00:27:52.889 Malloc9 00:27:52.889 Malloc10 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2160900 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2160900 /var/tmp/bdevperf.sock 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2160900 ']' 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:52.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
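The --json payload fed to bdev_svc above is produced by gen_nvmf_target_json, whose per-subsystem here-document expansion is traced below: it emits one bdev_nvme_attach_controller block per subsystem (1..10), joins the blocks with commas, and normalizes the result with jq. A condensed sketch of that pattern using the values visible in the resolved config further down; the wrapper object is simplified and this is not the exact common.sh source.

gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the per-subsystem blocks with commas and let jq pretty-print the payload.
    jq . <<JSON
{ "config": [ $(IFS=,; printf '%s' "${config[*]}") ] }
JSON
}
gen_target_json_sketch 1 2 3 4 5 6 7 8 9 10 > /tmp/bdevperf_nvmf.json   # illustrative output path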
00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:52.889 { 00:27:52.889 "params": { 00:27:52.889 "name": "Nvme$subsystem", 00:27:52.889 "trtype": "$TEST_TRANSPORT", 00:27:52.889 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.889 "adrfam": "ipv4", 00:27:52.889 "trsvcid": "$NVMF_PORT", 00:27:52.889 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.889 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.889 "hdgst": ${hdgst:-false}, 00:27:52.889 "ddgst": ${ddgst:-false} 00:27:52.889 }, 00:27:52.889 "method": "bdev_nvme_attach_controller" 00:27:52.889 } 00:27:52.889 EOF 00:27:52.889 )") 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:52.889 { 00:27:52.889 "params": { 00:27:52.889 "name": "Nvme$subsystem", 00:27:52.889 "trtype": "$TEST_TRANSPORT", 00:27:52.889 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.889 "adrfam": "ipv4", 00:27:52.889 "trsvcid": "$NVMF_PORT", 00:27:52.889 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.889 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.889 "hdgst": ${hdgst:-false}, 00:27:52.889 "ddgst": ${ddgst:-false} 00:27:52.889 }, 00:27:52.889 "method": "bdev_nvme_attach_controller" 00:27:52.889 } 00:27:52.889 EOF 00:27:52.889 )") 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:52.889 { 00:27:52.889 "params": { 00:27:52.889 "name": "Nvme$subsystem", 00:27:52.889 "trtype": "$TEST_TRANSPORT", 00:27:52.889 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.889 "adrfam": "ipv4", 00:27:52.889 "trsvcid": "$NVMF_PORT", 00:27:52.889 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.889 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.889 "hdgst": ${hdgst:-false}, 00:27:52.889 "ddgst": ${ddgst:-false} 00:27:52.889 }, 00:27:52.889 "method": "bdev_nvme_attach_controller" 00:27:52.889 } 00:27:52.889 EOF 00:27:52.889 )") 00:27:52.889 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:53.147 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:53.147 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- 
# config+=("$(cat <<-EOF 00:27:53.147 { 00:27:53.147 "params": { 00:27:53.147 "name": "Nvme$subsystem", 00:27:53.147 "trtype": "$TEST_TRANSPORT", 00:27:53.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.147 "adrfam": "ipv4", 00:27:53.147 "trsvcid": "$NVMF_PORT", 00:27:53.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.147 "hdgst": ${hdgst:-false}, 00:27:53.147 "ddgst": ${ddgst:-false} 00:27:53.147 }, 00:27:53.147 "method": "bdev_nvme_attach_controller" 00:27:53.147 } 00:27:53.147 EOF 00:27:53.147 )") 00:27:53.147 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:53.147 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:53.147 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:53.147 { 00:27:53.147 "params": { 00:27:53.147 "name": "Nvme$subsystem", 00:27:53.147 "trtype": "$TEST_TRANSPORT", 00:27:53.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.147 "adrfam": "ipv4", 00:27:53.147 "trsvcid": "$NVMF_PORT", 00:27:53.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.147 "hdgst": ${hdgst:-false}, 00:27:53.147 "ddgst": ${ddgst:-false} 00:27:53.147 }, 00:27:53.147 "method": "bdev_nvme_attach_controller" 00:27:53.147 } 00:27:53.147 EOF 00:27:53.147 )") 00:27:53.147 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:53.147 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:53.147 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:53.147 { 00:27:53.147 "params": { 00:27:53.147 "name": "Nvme$subsystem", 00:27:53.147 "trtype": "$TEST_TRANSPORT", 00:27:53.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.147 "adrfam": "ipv4", 00:27:53.147 "trsvcid": "$NVMF_PORT", 00:27:53.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.147 "hdgst": ${hdgst:-false}, 00:27:53.147 "ddgst": ${ddgst:-false} 00:27:53.147 }, 00:27:53.147 "method": "bdev_nvme_attach_controller" 00:27:53.147 } 00:27:53.147 EOF 00:27:53.147 )") 00:27:53.147 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:53.147 [2024-10-06 11:22:50.486528] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:27:53.147 [2024-10-06 11:22:50.486577] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:53.147 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:53.147 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:53.147 { 00:27:53.147 "params": { 00:27:53.147 "name": "Nvme$subsystem", 00:27:53.147 "trtype": "$TEST_TRANSPORT", 00:27:53.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.147 "adrfam": "ipv4", 00:27:53.147 "trsvcid": "$NVMF_PORT", 00:27:53.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.147 "hdgst": ${hdgst:-false}, 00:27:53.148 "ddgst": ${ddgst:-false} 00:27:53.148 }, 00:27:53.148 "method": "bdev_nvme_attach_controller" 00:27:53.148 } 00:27:53.148 EOF 00:27:53.148 )") 00:27:53.148 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:53.148 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:53.148 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:53.148 { 00:27:53.148 "params": { 00:27:53.148 "name": "Nvme$subsystem", 00:27:53.148 "trtype": "$TEST_TRANSPORT", 00:27:53.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.148 "adrfam": "ipv4", 00:27:53.148 "trsvcid": "$NVMF_PORT", 00:27:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.148 "hdgst": ${hdgst:-false}, 00:27:53.148 "ddgst": ${ddgst:-false} 00:27:53.148 }, 00:27:53.148 "method": "bdev_nvme_attach_controller" 00:27:53.148 } 00:27:53.148 EOF 00:27:53.148 )") 00:27:53.148 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:53.148 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:53.148 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:53.148 { 00:27:53.148 "params": { 00:27:53.148 "name": "Nvme$subsystem", 00:27:53.148 "trtype": "$TEST_TRANSPORT", 00:27:53.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.148 "adrfam": "ipv4", 00:27:53.148 "trsvcid": "$NVMF_PORT", 00:27:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.148 "hdgst": ${hdgst:-false}, 00:27:53.148 "ddgst": ${ddgst:-false} 00:27:53.148 }, 00:27:53.148 "method": "bdev_nvme_attach_controller" 00:27:53.148 } 00:27:53.148 EOF 00:27:53.148 )") 00:27:53.148 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:53.148 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:53.148 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:53.148 { 00:27:53.148 "params": { 00:27:53.148 "name": "Nvme$subsystem", 00:27:53.148 "trtype": "$TEST_TRANSPORT", 00:27:53.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.148 "adrfam": "ipv4", 
00:27:53.148 "trsvcid": "$NVMF_PORT", 00:27:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.148 "hdgst": ${hdgst:-false}, 00:27:53.148 "ddgst": ${ddgst:-false} 00:27:53.148 }, 00:27:53.148 "method": "bdev_nvme_attach_controller" 00:27:53.148 } 00:27:53.148 EOF 00:27:53.148 )") 00:27:53.148 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:53.148 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:27:53.148 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:27:53.148 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:27:53.148 "params": { 00:27:53.148 "name": "Nvme1", 00:27:53.148 "trtype": "tcp", 00:27:53.148 "traddr": "10.0.0.2", 00:27:53.148 "adrfam": "ipv4", 00:27:53.148 "trsvcid": "4420", 00:27:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:53.148 "hdgst": false, 00:27:53.148 "ddgst": false 00:27:53.148 }, 00:27:53.148 "method": "bdev_nvme_attach_controller" 00:27:53.148 },{ 00:27:53.148 "params": { 00:27:53.148 "name": "Nvme2", 00:27:53.148 "trtype": "tcp", 00:27:53.148 "traddr": "10.0.0.2", 00:27:53.148 "adrfam": "ipv4", 00:27:53.148 "trsvcid": "4420", 00:27:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:53.148 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:53.148 "hdgst": false, 00:27:53.148 "ddgst": false 00:27:53.148 }, 00:27:53.148 "method": "bdev_nvme_attach_controller" 00:27:53.148 },{ 00:27:53.148 "params": { 00:27:53.148 "name": "Nvme3", 00:27:53.148 "trtype": "tcp", 00:27:53.148 "traddr": "10.0.0.2", 00:27:53.148 "adrfam": "ipv4", 00:27:53.148 "trsvcid": "4420", 00:27:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:53.148 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:53.148 "hdgst": false, 00:27:53.148 "ddgst": false 00:27:53.148 }, 00:27:53.148 "method": "bdev_nvme_attach_controller" 00:27:53.148 },{ 00:27:53.148 "params": { 00:27:53.148 "name": "Nvme4", 00:27:53.148 "trtype": "tcp", 00:27:53.148 "traddr": "10.0.0.2", 00:27:53.148 "adrfam": "ipv4", 00:27:53.148 "trsvcid": "4420", 00:27:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:53.148 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:53.148 "hdgst": false, 00:27:53.148 "ddgst": false 00:27:53.148 }, 00:27:53.148 "method": "bdev_nvme_attach_controller" 00:27:53.148 },{ 00:27:53.148 "params": { 00:27:53.148 "name": "Nvme5", 00:27:53.148 "trtype": "tcp", 00:27:53.148 "traddr": "10.0.0.2", 00:27:53.148 "adrfam": "ipv4", 00:27:53.148 "trsvcid": "4420", 00:27:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:53.148 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:53.148 "hdgst": false, 00:27:53.148 "ddgst": false 00:27:53.148 }, 00:27:53.148 "method": "bdev_nvme_attach_controller" 00:27:53.148 },{ 00:27:53.148 "params": { 00:27:53.148 "name": "Nvme6", 00:27:53.148 "trtype": "tcp", 00:27:53.148 "traddr": "10.0.0.2", 00:27:53.148 "adrfam": "ipv4", 00:27:53.148 "trsvcid": "4420", 00:27:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:53.148 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:53.148 "hdgst": false, 00:27:53.148 "ddgst": false 00:27:53.148 }, 00:27:53.148 "method": "bdev_nvme_attach_controller" 00:27:53.148 },{ 00:27:53.148 "params": { 00:27:53.148 "name": "Nvme7", 00:27:53.148 "trtype": "tcp", 00:27:53.148 "traddr": "10.0.0.2", 00:27:53.148 
"adrfam": "ipv4", 00:27:53.148 "trsvcid": "4420", 00:27:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:53.148 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:53.148 "hdgst": false, 00:27:53.148 "ddgst": false 00:27:53.148 }, 00:27:53.148 "method": "bdev_nvme_attach_controller" 00:27:53.148 },{ 00:27:53.148 "params": { 00:27:53.148 "name": "Nvme8", 00:27:53.148 "trtype": "tcp", 00:27:53.148 "traddr": "10.0.0.2", 00:27:53.148 "adrfam": "ipv4", 00:27:53.148 "trsvcid": "4420", 00:27:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:53.148 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:53.148 "hdgst": false, 00:27:53.148 "ddgst": false 00:27:53.148 }, 00:27:53.148 "method": "bdev_nvme_attach_controller" 00:27:53.148 },{ 00:27:53.148 "params": { 00:27:53.148 "name": "Nvme9", 00:27:53.148 "trtype": "tcp", 00:27:53.149 "traddr": "10.0.0.2", 00:27:53.149 "adrfam": "ipv4", 00:27:53.149 "trsvcid": "4420", 00:27:53.149 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:53.149 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:53.149 "hdgst": false, 00:27:53.149 "ddgst": false 00:27:53.149 }, 00:27:53.149 "method": "bdev_nvme_attach_controller" 00:27:53.149 },{ 00:27:53.149 "params": { 00:27:53.149 "name": "Nvme10", 00:27:53.149 "trtype": "tcp", 00:27:53.149 "traddr": "10.0.0.2", 00:27:53.149 "adrfam": "ipv4", 00:27:53.149 "trsvcid": "4420", 00:27:53.149 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:53.149 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:53.149 "hdgst": false, 00:27:53.149 "ddgst": false 00:27:53.149 }, 00:27:53.149 "method": "bdev_nvme_attach_controller" 00:27:53.149 }' 00:27:53.149 [2024-10-06 11:22:50.543468] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.149 [2024-10-06 11:22:50.582426] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.045 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:55.045 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:27:55.045 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:55.045 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.045 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:55.045 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.045 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2160900 00:27:55.045 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:27:55.045 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:27:55.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2160900 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2160811 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:55.978 { 00:27:55.978 "params": { 00:27:55.978 "name": "Nvme$subsystem", 00:27:55.978 "trtype": "$TEST_TRANSPORT", 00:27:55.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.978 "adrfam": "ipv4", 00:27:55.978 "trsvcid": "$NVMF_PORT", 00:27:55.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.978 "hdgst": ${hdgst:-false}, 00:27:55.978 "ddgst": ${ddgst:-false} 00:27:55.978 }, 00:27:55.978 "method": "bdev_nvme_attach_controller" 00:27:55.978 } 00:27:55.978 EOF 00:27:55.978 )") 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:55.978 { 00:27:55.978 "params": { 00:27:55.978 "name": "Nvme$subsystem", 00:27:55.978 "trtype": "$TEST_TRANSPORT", 00:27:55.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.978 "adrfam": "ipv4", 00:27:55.978 "trsvcid": "$NVMF_PORT", 00:27:55.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.978 "hdgst": ${hdgst:-false}, 00:27:55.978 "ddgst": ${ddgst:-false} 00:27:55.978 }, 00:27:55.978 "method": "bdev_nvme_attach_controller" 00:27:55.978 } 00:27:55.978 EOF 00:27:55.978 )") 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:55.978 { 00:27:55.978 "params": { 00:27:55.978 "name": "Nvme$subsystem", 00:27:55.978 "trtype": "$TEST_TRANSPORT", 00:27:55.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.978 "adrfam": "ipv4", 00:27:55.978 "trsvcid": "$NVMF_PORT", 00:27:55.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.978 "hdgst": ${hdgst:-false}, 00:27:55.978 "ddgst": ${ddgst:-false} 00:27:55.978 }, 00:27:55.978 "method": "bdev_nvme_attach_controller" 00:27:55.978 } 00:27:55.978 EOF 00:27:55.978 )") 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:55.978 { 00:27:55.978 "params": { 00:27:55.978 "name": "Nvme$subsystem", 00:27:55.978 "trtype": "$TEST_TRANSPORT", 00:27:55.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.978 "adrfam": "ipv4", 00:27:55.978 "trsvcid": "$NVMF_PORT", 00:27:55.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.978 "hdgst": ${hdgst:-false}, 00:27:55.978 "ddgst": ${ddgst:-false} 00:27:55.978 }, 00:27:55.978 "method": "bdev_nvme_attach_controller" 00:27:55.978 } 00:27:55.978 EOF 00:27:55.978 )") 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:55.978 { 00:27:55.978 "params": { 00:27:55.978 "name": "Nvme$subsystem", 00:27:55.978 "trtype": "$TEST_TRANSPORT", 00:27:55.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.978 "adrfam": "ipv4", 00:27:55.978 "trsvcid": "$NVMF_PORT", 00:27:55.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.978 "hdgst": ${hdgst:-false}, 00:27:55.978 "ddgst": ${ddgst:-false} 00:27:55.978 }, 00:27:55.978 "method": "bdev_nvme_attach_controller" 00:27:55.978 } 00:27:55.978 EOF 00:27:55.978 )") 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:55.978 { 00:27:55.978 "params": { 00:27:55.978 "name": "Nvme$subsystem", 00:27:55.978 "trtype": "$TEST_TRANSPORT", 00:27:55.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.978 "adrfam": "ipv4", 00:27:55.978 "trsvcid": "$NVMF_PORT", 00:27:55.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.978 "hdgst": ${hdgst:-false}, 00:27:55.978 "ddgst": ${ddgst:-false} 00:27:55.978 }, 00:27:55.978 "method": "bdev_nvme_attach_controller" 00:27:55.978 } 00:27:55.978 EOF 00:27:55.978 )") 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:55.978 { 00:27:55.978 "params": { 00:27:55.978 "name": "Nvme$subsystem", 00:27:55.978 "trtype": "$TEST_TRANSPORT", 00:27:55.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.978 "adrfam": "ipv4", 00:27:55.978 "trsvcid": "$NVMF_PORT", 00:27:55.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.978 "hdgst": ${hdgst:-false}, 00:27:55.978 "ddgst": ${ddgst:-false} 00:27:55.978 }, 00:27:55.978 "method": "bdev_nvme_attach_controller" 00:27:55.978 } 00:27:55.978 EOF 00:27:55.978 )") 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:55.978 [2024-10-06 
11:22:53.432535] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:27:55.978 [2024-10-06 11:22:53.432585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2161371 ] 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:55.978 { 00:27:55.978 "params": { 00:27:55.978 "name": "Nvme$subsystem", 00:27:55.978 "trtype": "$TEST_TRANSPORT", 00:27:55.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.978 "adrfam": "ipv4", 00:27:55.978 "trsvcid": "$NVMF_PORT", 00:27:55.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.978 "hdgst": ${hdgst:-false}, 00:27:55.978 "ddgst": ${ddgst:-false} 00:27:55.978 }, 00:27:55.978 "method": "bdev_nvme_attach_controller" 00:27:55.978 } 00:27:55.978 EOF 00:27:55.978 )") 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:55.978 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:55.978 { 00:27:55.978 "params": { 00:27:55.978 "name": "Nvme$subsystem", 00:27:55.978 "trtype": "$TEST_TRANSPORT", 00:27:55.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.978 "adrfam": "ipv4", 00:27:55.978 "trsvcid": "$NVMF_PORT", 00:27:55.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.978 "hdgst": ${hdgst:-false}, 00:27:55.979 "ddgst": ${ddgst:-false} 00:27:55.979 }, 00:27:55.979 "method": "bdev_nvme_attach_controller" 00:27:55.979 } 00:27:55.979 EOF 00:27:55.979 )") 00:27:55.979 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:55.979 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:55.979 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:55.979 { 00:27:55.979 "params": { 00:27:55.979 "name": "Nvme$subsystem", 00:27:55.979 "trtype": "$TEST_TRANSPORT", 00:27:55.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.979 "adrfam": "ipv4", 00:27:55.979 "trsvcid": "$NVMF_PORT", 00:27:55.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.979 "hdgst": ${hdgst:-false}, 00:27:55.979 "ddgst": ${ddgst:-false} 00:27:55.979 }, 00:27:55.979 "method": "bdev_nvme_attach_controller" 00:27:55.979 } 00:27:55.979 EOF 00:27:55.979 )") 00:27:55.979 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:55.979 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
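The trace above records gen_nvmf_target_json building one bdev_nvme_attach_controller fragment per subsystem in the config array (the repeated config+=() here-doc steps), then joining the fragments with IFS=, and pretty-printing the result through jq. The following is a minimal standalone sketch of that accumulate-then-join pattern, not the nvmf/common.sh source: the literal values are taken from the expanded output visible in this trace, and the outer JSON document the real helper splices these fragments into is not shown here, so the sketch wraps them in a bare array only so jq can validate it.

# Sketch: emulate the config+=() / IFS=, / printf / jq steps for two subsystems.
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
joined="${config[*]}"                # comma-join, as in the printf '%s\n' step above
unset IFS
printf '[%s]\n' "$joined" | jq .     # wrapped in [] here only so jq can parse the sketch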
00:27:55.979 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:27:55.979 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:27:55.979 "params": { 00:27:55.979 "name": "Nvme1", 00:27:55.979 "trtype": "tcp", 00:27:55.979 "traddr": "10.0.0.2", 00:27:55.979 "adrfam": "ipv4", 00:27:55.979 "trsvcid": "4420", 00:27:55.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:55.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:55.979 "hdgst": false, 00:27:55.979 "ddgst": false 00:27:55.979 }, 00:27:55.979 "method": "bdev_nvme_attach_controller" 00:27:55.979 },{ 00:27:55.979 "params": { 00:27:55.979 "name": "Nvme2", 00:27:55.979 "trtype": "tcp", 00:27:55.979 "traddr": "10.0.0.2", 00:27:55.979 "adrfam": "ipv4", 00:27:55.979 "trsvcid": "4420", 00:27:55.979 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:55.979 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:55.979 "hdgst": false, 00:27:55.979 "ddgst": false 00:27:55.979 }, 00:27:55.979 "method": "bdev_nvme_attach_controller" 00:27:55.979 },{ 00:27:55.979 "params": { 00:27:55.979 "name": "Nvme3", 00:27:55.979 "trtype": "tcp", 00:27:55.979 "traddr": "10.0.0.2", 00:27:55.979 "adrfam": "ipv4", 00:27:55.979 "trsvcid": "4420", 00:27:55.979 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:55.979 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:55.979 "hdgst": false, 00:27:55.979 "ddgst": false 00:27:55.979 }, 00:27:55.979 "method": "bdev_nvme_attach_controller" 00:27:55.979 },{ 00:27:55.979 "params": { 00:27:55.979 "name": "Nvme4", 00:27:55.979 "trtype": "tcp", 00:27:55.979 "traddr": "10.0.0.2", 00:27:55.979 "adrfam": "ipv4", 00:27:55.979 "trsvcid": "4420", 00:27:55.979 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:55.979 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:55.979 "hdgst": false, 00:27:55.979 "ddgst": false 00:27:55.979 }, 00:27:55.979 "method": "bdev_nvme_attach_controller" 00:27:55.979 },{ 00:27:55.979 "params": { 00:27:55.979 "name": "Nvme5", 00:27:55.979 "trtype": "tcp", 00:27:55.979 "traddr": "10.0.0.2", 00:27:55.979 "adrfam": "ipv4", 00:27:55.979 "trsvcid": "4420", 00:27:55.979 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:55.979 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:55.979 "hdgst": false, 00:27:55.979 "ddgst": false 00:27:55.979 }, 00:27:55.979 "method": "bdev_nvme_attach_controller" 00:27:55.979 },{ 00:27:55.979 "params": { 00:27:55.979 "name": "Nvme6", 00:27:55.979 "trtype": "tcp", 00:27:55.979 "traddr": "10.0.0.2", 00:27:55.979 "adrfam": "ipv4", 00:27:55.979 "trsvcid": "4420", 00:27:55.979 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:55.979 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:55.979 "hdgst": false, 00:27:55.979 "ddgst": false 00:27:55.979 }, 00:27:55.979 "method": "bdev_nvme_attach_controller" 00:27:55.979 },{ 00:27:55.979 "params": { 00:27:55.979 "name": "Nvme7", 00:27:55.979 "trtype": "tcp", 00:27:55.979 "traddr": "10.0.0.2", 00:27:55.979 "adrfam": "ipv4", 00:27:55.979 "trsvcid": "4420", 00:27:55.979 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:55.979 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:55.979 "hdgst": false, 00:27:55.979 "ddgst": false 00:27:55.979 }, 00:27:55.979 "method": "bdev_nvme_attach_controller" 00:27:55.979 },{ 00:27:55.979 "params": { 00:27:55.979 "name": "Nvme8", 00:27:55.979 "trtype": "tcp", 00:27:55.979 "traddr": "10.0.0.2", 00:27:55.979 "adrfam": "ipv4", 00:27:55.979 "trsvcid": "4420", 00:27:55.979 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:55.979 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:27:55.979 "hdgst": false, 00:27:55.979 "ddgst": false 00:27:55.979 }, 00:27:55.979 "method": "bdev_nvme_attach_controller" 00:27:55.979 },{ 00:27:55.979 "params": { 00:27:55.979 "name": "Nvme9", 00:27:55.979 "trtype": "tcp", 00:27:55.979 "traddr": "10.0.0.2", 00:27:55.979 "adrfam": "ipv4", 00:27:55.979 "trsvcid": "4420", 00:27:55.979 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:55.979 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:55.979 "hdgst": false, 00:27:55.979 "ddgst": false 00:27:55.979 }, 00:27:55.979 "method": "bdev_nvme_attach_controller" 00:27:55.979 },{ 00:27:55.979 "params": { 00:27:55.979 "name": "Nvme10", 00:27:55.979 "trtype": "tcp", 00:27:55.979 "traddr": "10.0.0.2", 00:27:55.979 "adrfam": "ipv4", 00:27:55.979 "trsvcid": "4420", 00:27:55.979 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:55.979 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:55.979 "hdgst": false, 00:27:55.979 "ddgst": false 00:27:55.979 }, 00:27:55.979 "method": "bdev_nvme_attach_controller" 00:27:55.979 }' 00:27:55.979 [2024-10-06 11:22:53.490829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.979 [2024-10-06 11:22:53.530109] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.349 Running I/O for 1 seconds... 00:27:58.720 2252.00 IOPS, 140.75 MiB/s 00:27:58.720 Latency(us) 00:27:58.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.720 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:58.720 Verification LBA range: start 0x0 length 0x400 00:27:58.720 Nvme1n1 : 1.13 291.42 18.21 0.00 0.00 214029.27 17725.93 191739.61 00:27:58.720 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:58.720 Verification LBA range: start 0x0 length 0x400 00:27:58.720 Nvme2n1 : 1.08 237.50 14.84 0.00 0.00 263313.80 18599.74 225693.50 00:27:58.720 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:58.720 Verification LBA range: start 0x0 length 0x400 00:27:58.720 Nvme3n1 : 1.14 281.26 17.58 0.00 0.00 219160.28 15791.06 196732.83 00:27:58.720 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:58.720 Verification LBA range: start 0x0 length 0x400 00:27:58.720 Nvme4n1 : 1.08 299.39 18.71 0.00 0.00 201337.98 8426.06 211712.49 00:27:58.720 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:58.721 Verification LBA range: start 0x0 length 0x400 00:27:58.721 Nvme5n1 : 1.14 279.56 17.47 0.00 0.00 214612.36 15791.06 227690.79 00:27:58.721 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:58.721 Verification LBA range: start 0x0 length 0x400 00:27:58.721 Nvme6n1 : 1.13 282.03 17.63 0.00 0.00 209332.71 18724.57 228689.43 00:27:58.721 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:58.721 Verification LBA range: start 0x0 length 0x400 00:27:58.721 Nvme7n1 : 1.13 283.66 17.73 0.00 0.00 205172.35 14979.66 207717.91 00:27:58.721 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:58.721 Verification LBA range: start 0x0 length 0x400 00:27:58.721 Nvme8n1 : 1.15 278.58 17.41 0.00 0.00 206128.96 15666.22 231685.36 00:27:58.721 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:58.721 Verification LBA range: start 0x0 length 0x400 00:27:58.721 Nvme9n1 : 1.15 277.28 17.33 0.00 0.00 204086.03 15416.56 225693.50 00:27:58.721 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:58.721 
Verification LBA range: start 0x0 length 0x400 00:27:58.721 Nvme10n1 : 1.16 276.43 17.28 0.00 0.00 201685.87 12982.37 242670.45 00:27:58.721 =================================================================================================================== 00:27:58.721 Total : 2787.12 174.20 0.00 0.00 212865.49 8426.06 242670.45 00:27:58.721 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:27:58.721 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:58.721 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:58.721 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:58.721 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:58.721 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:58.721 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:27:58.721 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:58.721 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:27:58.721 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:58.721 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:58.721 rmmod nvme_tcp 00:27:58.721 rmmod nvme_fabrics 00:27:58.721 rmmod nvme_keyring 00:27:58.979 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:58.979 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:27:58.979 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:27:58.979 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 2160811 ']' 00:27:58.979 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 2160811 00:27:58.979 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2160811 ']' 00:27:58.979 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2160811 00:27:58.979 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:27:58.979 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:58.979 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2160811 00:27:58.979 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:58.979 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:58.979 
11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2160811' 00:27:58.979 killing process with pid 2160811 00:27:58.979 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2160811 00:27:58.979 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2160811 00:27:59.237 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:59.237 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:59.237 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:59.237 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:27:59.237 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:27:59.237 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:27:59.237 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:59.237 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:59.237 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:59.237 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.237 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:59.237 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.769 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:01.769 00:28:01.769 real 0m14.652s 00:28:01.769 user 0m33.413s 00:28:01.769 sys 0m5.454s 00:28:01.769 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:01.769 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:01.769 ************************************ 00:28:01.769 END TEST nvmf_shutdown_tc1 00:28:01.769 ************************************ 00:28:01.769 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:01.769 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:01.769 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:01.769 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:01.769 ************************************ 00:28:01.769 START TEST nvmf_shutdown_tc2 00:28:01.769 ************************************ 00:28:01.769 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:28:01.769 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # 
starttarget 00:28:01.769 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:01.769 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:01.769 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:01.769 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:01.769 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:01.769 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:01.769 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.769 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:01.769 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:01.770 11:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:01.770 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
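At this point gather_supported_nvmf_pci_devs has matched 0000:af:00.0 against the Intel E810 device-ID list (vendor 0x8086, device 0x159b, ice driver); the same check is repeated for 0000:af:00.1 below, after which the net devices under each function (cvl_0_0, cvl_0_1) are collected from sysfs. A small sketch of that vendor:device classification, using only the IDs that appear in this log (the real lists are longer), would be:

# Sketch: classify a PCI function by vendor:device ID and list its net devices.
pci=0000:af:00.0
vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")   # 0x8086 (Intel) on this node
device=$(cat "/sys/bus/pci/devices/$pci/device")   # 0x159b on this node
case "$vendor:$device" in
  0x8086:0x1592|0x8086:0x159b) echo "$pci: Intel E810 family (ice)";;
  0x8086:0x37d2)               echo "$pci: Intel X722 (i40e)";;
  *)                           echo "$pci: not in the supported NIC lists";;
esac
ls "/sys/bus/pci/devices/$pci/net/"   # net devices bound to the function, e.g. cvl_0_0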
00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:01.770 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:01.770 Found net devices under 0000:af:00.0: cvl_0_0 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:01.770 11:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:01.770 Found net devices under 0000:af:00.1: cvl_0_1 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:01.770 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:01.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:01.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:28:01.771 00:28:01.771 --- 10.0.0.2 ping statistics --- 00:28:01.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.771 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:01.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:01.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:28:01.771 00:28:01.771 --- 10.0.0.1 ping statistics --- 00:28:01.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.771 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:01.771 11:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2162373 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2162373 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2162373 ']' 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:01.771 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.771 [2024-10-06 11:22:59.213884] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:28:01.771 [2024-10-06 11:22:59.213928] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.771 [2024-10-06 11:22:59.273010] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:01.771 [2024-10-06 11:22:59.312035] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:01.771 [2024-10-06 11:22:59.312079] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:01.771 [2024-10-06 11:22:59.312086] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:01.771 [2024-10-06 11:22:59.312092] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:01.771 [2024-10-06 11:22:59.312097] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
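The tc2 target above is launched with -i 0 -e 0xFFFF -m 0x1E, and the four reactor threads reported next are consistent with that core mask: 0x1E is binary 11110, i.e. cores 1 through 4. A quick way to decode such a mask (a sketch, not SPDK code):

# Decode an SPDK-style hex core mask into the core numbers it selects.
mask=0x1E
for core in $(seq 0 31); do
  if (( (mask >> core) & 1 )); then
    echo "core $core selected"
  fi
done
# 0x1E -> cores 1, 2, 3 and 4, matching the "Reactor started on core N" lines below.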
00:28:01.771 [2024-10-06 11:22:59.313657] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:01.771 [2024-10-06 11:22:59.313749] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:28:01.771 [2024-10-06 11:22:59.313855] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.771 [2024-10-06 11:22:59.313856] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:02.029 [2024-10-06 11:22:59.460108] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:02.029 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.030 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:02.030 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.030 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:02.030 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.030 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:02.030 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:02.030 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.030 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:02.030 Malloc1 00:28:02.030 [2024-10-06 11:22:59.555383] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.030 Malloc2 00:28:02.287 Malloc3 00:28:02.287 Malloc4 00:28:02.287 Malloc5 00:28:02.287 Malloc6 00:28:02.287 Malloc7 00:28:02.287 Malloc8 00:28:02.546 Malloc9 00:28:02.546 Malloc10 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2162638 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2162638 /var/tmp/bdevperf.sock 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2162638 ']' 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:02.546 11:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:02.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:02.546 { 00:28:02.546 "params": { 00:28:02.546 "name": "Nvme$subsystem", 00:28:02.546 "trtype": "$TEST_TRANSPORT", 00:28:02.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.546 "adrfam": "ipv4", 00:28:02.546 "trsvcid": "$NVMF_PORT", 00:28:02.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.546 "hdgst": ${hdgst:-false}, 00:28:02.546 "ddgst": ${ddgst:-false} 00:28:02.546 }, 00:28:02.546 "method": "bdev_nvme_attach_controller" 00:28:02.546 } 00:28:02.546 EOF 00:28:02.546 )") 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:02.546 { 00:28:02.546 "params": { 00:28:02.546 "name": "Nvme$subsystem", 00:28:02.546 "trtype": "$TEST_TRANSPORT", 00:28:02.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.546 "adrfam": "ipv4", 00:28:02.546 "trsvcid": "$NVMF_PORT", 00:28:02.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.546 "hdgst": ${hdgst:-false}, 00:28:02.546 "ddgst": ${ddgst:-false} 00:28:02.546 }, 00:28:02.546 "method": "bdev_nvme_attach_controller" 00:28:02.546 } 00:28:02.546 EOF 00:28:02.546 )") 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:02.546 { 00:28:02.546 "params": { 00:28:02.546 
"name": "Nvme$subsystem", 00:28:02.546 "trtype": "$TEST_TRANSPORT", 00:28:02.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.546 "adrfam": "ipv4", 00:28:02.546 "trsvcid": "$NVMF_PORT", 00:28:02.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.546 "hdgst": ${hdgst:-false}, 00:28:02.546 "ddgst": ${ddgst:-false} 00:28:02.546 }, 00:28:02.546 "method": "bdev_nvme_attach_controller" 00:28:02.546 } 00:28:02.546 EOF 00:28:02.546 )") 00:28:02.546 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:02.546 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:02.546 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:02.546 { 00:28:02.546 "params": { 00:28:02.546 "name": "Nvme$subsystem", 00:28:02.546 "trtype": "$TEST_TRANSPORT", 00:28:02.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.546 "adrfam": "ipv4", 00:28:02.546 "trsvcid": "$NVMF_PORT", 00:28:02.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.546 "hdgst": ${hdgst:-false}, 00:28:02.546 "ddgst": ${ddgst:-false} 00:28:02.546 }, 00:28:02.546 "method": "bdev_nvme_attach_controller" 00:28:02.546 } 00:28:02.546 EOF 00:28:02.546 )") 00:28:02.546 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:02.546 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:02.546 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:02.546 { 00:28:02.546 "params": { 00:28:02.546 "name": "Nvme$subsystem", 00:28:02.546 "trtype": "$TEST_TRANSPORT", 00:28:02.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.546 "adrfam": "ipv4", 00:28:02.546 "trsvcid": "$NVMF_PORT", 00:28:02.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.546 "hdgst": ${hdgst:-false}, 00:28:02.546 "ddgst": ${ddgst:-false} 00:28:02.546 }, 00:28:02.546 "method": "bdev_nvme_attach_controller" 00:28:02.546 } 00:28:02.546 EOF 00:28:02.546 )") 00:28:02.546 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:02.546 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:02.546 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:02.546 { 00:28:02.546 "params": { 00:28:02.546 "name": "Nvme$subsystem", 00:28:02.546 "trtype": "$TEST_TRANSPORT", 00:28:02.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.546 "adrfam": "ipv4", 00:28:02.546 "trsvcid": "$NVMF_PORT", 00:28:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.547 "hdgst": ${hdgst:-false}, 00:28:02.547 "ddgst": ${ddgst:-false} 00:28:02.547 }, 00:28:02.547 "method": "bdev_nvme_attach_controller" 00:28:02.547 } 00:28:02.547 EOF 00:28:02.547 )") 00:28:02.547 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:02.547 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:28:02.547 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:02.547 { 00:28:02.547 "params": { 00:28:02.547 "name": "Nvme$subsystem", 00:28:02.547 "trtype": "$TEST_TRANSPORT", 00:28:02.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.547 "adrfam": "ipv4", 00:28:02.547 "trsvcid": "$NVMF_PORT", 00:28:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.547 "hdgst": ${hdgst:-false}, 00:28:02.547 "ddgst": ${ddgst:-false} 00:28:02.547 }, 00:28:02.547 "method": "bdev_nvme_attach_controller" 00:28:02.547 } 00:28:02.547 EOF 00:28:02.547 )") 00:28:02.547 [2024-10-06 11:23:00.025989] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:28:02.547 [2024-10-06 11:23:00.026037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162638 ] 00:28:02.547 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:02.547 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:02.547 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:02.547 { 00:28:02.547 "params": { 00:28:02.547 "name": "Nvme$subsystem", 00:28:02.547 "trtype": "$TEST_TRANSPORT", 00:28:02.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.547 "adrfam": "ipv4", 00:28:02.547 "trsvcid": "$NVMF_PORT", 00:28:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.547 "hdgst": ${hdgst:-false}, 00:28:02.547 "ddgst": ${ddgst:-false} 00:28:02.547 }, 00:28:02.547 "method": "bdev_nvme_attach_controller" 00:28:02.547 } 00:28:02.547 EOF 00:28:02.547 )") 00:28:02.547 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:02.547 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:02.547 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:02.547 { 00:28:02.547 "params": { 00:28:02.547 "name": "Nvme$subsystem", 00:28:02.547 "trtype": "$TEST_TRANSPORT", 00:28:02.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.547 "adrfam": "ipv4", 00:28:02.547 "trsvcid": "$NVMF_PORT", 00:28:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.547 "hdgst": ${hdgst:-false}, 00:28:02.547 "ddgst": ${ddgst:-false} 00:28:02.547 }, 00:28:02.547 "method": "bdev_nvme_attach_controller" 00:28:02.547 } 00:28:02.547 EOF 00:28:02.547 )") 00:28:02.547 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:02.547 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:02.547 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:02.547 { 00:28:02.547 "params": { 00:28:02.547 "name": "Nvme$subsystem", 00:28:02.547 "trtype": "$TEST_TRANSPORT", 00:28:02.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.547 
"adrfam": "ipv4", 00:28:02.547 "trsvcid": "$NVMF_PORT", 00:28:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.547 "hdgst": ${hdgst:-false}, 00:28:02.547 "ddgst": ${ddgst:-false} 00:28:02.547 }, 00:28:02.547 "method": "bdev_nvme_attach_controller" 00:28:02.547 } 00:28:02.547 EOF 00:28:02.547 )") 00:28:02.547 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:02.547 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:28:02.547 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:28:02.547 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:02.547 "params": { 00:28:02.547 "name": "Nvme1", 00:28:02.547 "trtype": "tcp", 00:28:02.547 "traddr": "10.0.0.2", 00:28:02.547 "adrfam": "ipv4", 00:28:02.547 "trsvcid": "4420", 00:28:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:02.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:02.547 "hdgst": false, 00:28:02.547 "ddgst": false 00:28:02.547 }, 00:28:02.547 "method": "bdev_nvme_attach_controller" 00:28:02.547 },{ 00:28:02.547 "params": { 00:28:02.547 "name": "Nvme2", 00:28:02.547 "trtype": "tcp", 00:28:02.547 "traddr": "10.0.0.2", 00:28:02.547 "adrfam": "ipv4", 00:28:02.547 "trsvcid": "4420", 00:28:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:02.547 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:02.547 "hdgst": false, 00:28:02.547 "ddgst": false 00:28:02.547 }, 00:28:02.547 "method": "bdev_nvme_attach_controller" 00:28:02.547 },{ 00:28:02.547 "params": { 00:28:02.547 "name": "Nvme3", 00:28:02.547 "trtype": "tcp", 00:28:02.547 "traddr": "10.0.0.2", 00:28:02.547 "adrfam": "ipv4", 00:28:02.547 "trsvcid": "4420", 00:28:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:02.547 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:02.547 "hdgst": false, 00:28:02.547 "ddgst": false 00:28:02.547 }, 00:28:02.547 "method": "bdev_nvme_attach_controller" 00:28:02.547 },{ 00:28:02.547 "params": { 00:28:02.547 "name": "Nvme4", 00:28:02.547 "trtype": "tcp", 00:28:02.547 "traddr": "10.0.0.2", 00:28:02.547 "adrfam": "ipv4", 00:28:02.547 "trsvcid": "4420", 00:28:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:02.547 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:02.547 "hdgst": false, 00:28:02.547 "ddgst": false 00:28:02.547 }, 00:28:02.547 "method": "bdev_nvme_attach_controller" 00:28:02.547 },{ 00:28:02.547 "params": { 00:28:02.547 "name": "Nvme5", 00:28:02.547 "trtype": "tcp", 00:28:02.547 "traddr": "10.0.0.2", 00:28:02.547 "adrfam": "ipv4", 00:28:02.547 "trsvcid": "4420", 00:28:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:02.547 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:02.547 "hdgst": false, 00:28:02.547 "ddgst": false 00:28:02.547 }, 00:28:02.547 "method": "bdev_nvme_attach_controller" 00:28:02.547 },{ 00:28:02.547 "params": { 00:28:02.547 "name": "Nvme6", 00:28:02.547 "trtype": "tcp", 00:28:02.547 "traddr": "10.0.0.2", 00:28:02.547 "adrfam": "ipv4", 00:28:02.547 "trsvcid": "4420", 00:28:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:02.547 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:02.547 "hdgst": false, 00:28:02.547 "ddgst": false 00:28:02.547 }, 00:28:02.547 "method": "bdev_nvme_attach_controller" 00:28:02.547 },{ 00:28:02.547 "params": { 00:28:02.547 "name": "Nvme7", 00:28:02.547 "trtype": "tcp", 00:28:02.547 "traddr": "10.0.0.2", 
00:28:02.547 "adrfam": "ipv4", 00:28:02.547 "trsvcid": "4420", 00:28:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:02.547 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:02.547 "hdgst": false, 00:28:02.547 "ddgst": false 00:28:02.547 }, 00:28:02.547 "method": "bdev_nvme_attach_controller" 00:28:02.547 },{ 00:28:02.547 "params": { 00:28:02.547 "name": "Nvme8", 00:28:02.547 "trtype": "tcp", 00:28:02.547 "traddr": "10.0.0.2", 00:28:02.547 "adrfam": "ipv4", 00:28:02.547 "trsvcid": "4420", 00:28:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:02.547 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:02.547 "hdgst": false, 00:28:02.547 "ddgst": false 00:28:02.547 }, 00:28:02.547 "method": "bdev_nvme_attach_controller" 00:28:02.547 },{ 00:28:02.547 "params": { 00:28:02.547 "name": "Nvme9", 00:28:02.547 "trtype": "tcp", 00:28:02.547 "traddr": "10.0.0.2", 00:28:02.547 "adrfam": "ipv4", 00:28:02.547 "trsvcid": "4420", 00:28:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:02.547 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:02.547 "hdgst": false, 00:28:02.547 "ddgst": false 00:28:02.547 }, 00:28:02.547 "method": "bdev_nvme_attach_controller" 00:28:02.547 },{ 00:28:02.547 "params": { 00:28:02.547 "name": "Nvme10", 00:28:02.547 "trtype": "tcp", 00:28:02.547 "traddr": "10.0.0.2", 00:28:02.547 "adrfam": "ipv4", 00:28:02.547 "trsvcid": "4420", 00:28:02.547 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:02.547 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:02.547 "hdgst": false, 00:28:02.547 "ddgst": false 00:28:02.547 }, 00:28:02.547 "method": "bdev_nvme_attach_controller" 00:28:02.547 }' 00:28:02.547 [2024-10-06 11:23:00.084750] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.805 [2024-10-06 11:23:00.124897] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.701 Running I/O for 10 seconds... 
00:28:04.701 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:04.701 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:04.701 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:04.701 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.701 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.701 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.701 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:04.701 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:04.701 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:04.701 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:04.701 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:04.701 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:04.701 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:04.701 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:04.701 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:04.701 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.701 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.701 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.701 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:04.702 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:04.702 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:04.959 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:04.959 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:04.959 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:04.959 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:04.959 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.959 11:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.959 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.959 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:04.959 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:04.959 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:05.217 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:05.217 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:05.217 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:05.217 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:05.217 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.217 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:05.217 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.217 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=136 00:28:05.217 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:28:05.217 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:05.217 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:05.217 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:05.217 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2162638 00:28:05.217 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2162638 ']' 00:28:05.217 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2162638 00:28:05.217 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:28:05.217 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:05.217 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2162638 00:28:05.475 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:05.475 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:05.475 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2162638' 00:28:05.475 killing process with pid 2162638 00:28:05.475 11:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2162638 00:28:05.475 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2162638 00:28:05.475 Received shutdown signal, test time was about 0.913275 seconds 00:28:05.475 00:28:05.475 Latency(us) 00:28:05.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.475 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.475 Verification LBA range: start 0x0 length 0x400 00:28:05.475 Nvme1n1 : 0.90 283.58 17.72 0.00 0.00 223288.08 18599.74 232684.01 00:28:05.475 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.475 Verification LBA range: start 0x0 length 0x400 00:28:05.475 Nvme2n1 : 0.90 284.86 17.80 0.00 0.00 218414.81 15978.30 205720.62 00:28:05.475 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.476 Verification LBA range: start 0x0 length 0x400 00:28:05.476 Nvme3n1 : 0.88 290.86 18.18 0.00 0.00 209769.81 15978.30 214708.42 00:28:05.476 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.476 Verification LBA range: start 0x0 length 0x400 00:28:05.476 Nvme4n1 : 0.88 290.36 18.15 0.00 0.00 206129.74 14792.41 203723.34 00:28:05.476 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.476 Verification LBA range: start 0x0 length 0x400 00:28:05.476 Nvme5n1 : 0.89 286.30 17.89 0.00 0.00 205347.72 15354.15 212711.13 00:28:05.476 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.476 Verification LBA range: start 0x0 length 0x400 00:28:05.476 Nvme6n1 : 0.89 287.15 17.95 0.00 0.00 201176.26 16227.96 216705.71 00:28:05.476 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.476 Verification LBA range: start 0x0 length 0x400 00:28:05.476 Nvme7n1 : 0.90 282.97 17.69 0.00 0.00 200533.46 15291.73 201726.05 00:28:05.476 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.476 Verification LBA range: start 0x0 length 0x400 00:28:05.476 Nvme8n1 : 0.91 281.70 17.61 0.00 0.00 197801.33 13731.35 219701.64 00:28:05.476 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.476 Verification LBA range: start 0x0 length 0x400 00:28:05.476 Nvme9n1 : 0.91 280.51 17.53 0.00 0.00 194436.63 8051.57 224694.86 00:28:05.476 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.476 Verification LBA range: start 0x0 length 0x400 00:28:05.476 Nvme10n1 : 0.88 219.41 13.71 0.00 0.00 242534.56 17850.76 240673.16 00:28:05.476 =================================================================================================================== 00:28:05.476 Total : 2787.68 174.23 0.00 0.00 209107.56 8051.57 240673.16 00:28:05.476 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2162373 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:06.847 rmmod nvme_tcp 00:28:06.847 rmmod nvme_fabrics 00:28:06.847 rmmod nvme_keyring 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 2162373 ']' 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 2162373 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2162373 ']' 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2162373 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2162373 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2162373' 00:28:06.847 killing process with pid 2162373 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2162373 00:28:06.847 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2162373 00:28:07.106 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:07.106 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 
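Editor's note: the shutdown sequence above calls killprocess twice, first on the bdevperf pid (2162638) and then on the nvmf_tgt pid (2162373). The bash sketch below reconstructs that helper from the trace entries (autotest_common.sh@950-974); the exact argument handling and error paths are assumptions, and it relies on the pid being a child of the calling shell so that wait can reap it.

killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # '[' -z ... ']' guard seen at autotest_common.sh@950
    kill -0 "$pid" 2>/dev/null || return 0    # kill -0 probe: nothing to do if the process is gone
    if [ "$(uname)" = Linux ]; then
        # Refuse to signal a privileged wrapper; only the reactor process itself should be killed.
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                       # reap the child so its exit status does not leak into later steps
}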
00:28:07.106 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:07.106 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:07.106 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:28:07.106 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:28:07.106 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:07.106 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:07.106 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:07.106 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.106 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.106 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.635 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:09.635 00:28:09.635 real 0m7.754s 00:28:09.635 user 0m23.755s 00:28:09.635 sys 0m1.384s 00:28:09.635 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:09.635 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.635 ************************************ 00:28:09.635 END TEST nvmf_shutdown_tc2 00:28:09.635 ************************************ 00:28:09.635 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:09.635 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:09.635 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:09.635 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:09.635 ************************************ 00:28:09.635 START TEST nvmf_shutdown_tc3 00:28:09.635 ************************************ 00:28:09.635 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:28:09.635 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:09.635 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:09.635 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:09.635 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.635 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:09.635 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:09.635 11:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:09.635 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.635 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.635 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.635 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.636 11:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:09.636 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:09.636 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:09.636 Found net devices under 0000:af:00.0: cvl_0_0 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:09.636 Found net devices under 0000:af:00.1: cvl_0_1 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:09.636 11:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:09.636 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:09.637 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:09.637 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.637 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.637 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:09.637 11:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:09.637 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:09.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:09.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:28:09.637 00:28:09.637 --- 10.0.0.2 ping statistics --- 00:28:09.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.637 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:28:09.637 11:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:09.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:28:09.637 00:28:09.637 --- 10.0.0.1 ping statistics --- 00:28:09.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.637 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=2163885 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 2163885 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 
2163885 ']' 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:09.637 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:09.637 [2024-10-06 11:23:07.105026] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:28:09.637 [2024-10-06 11:23:07.105082] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.637 [2024-10-06 11:23:07.164094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:09.637 [2024-10-06 11:23:07.202509] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.637 [2024-10-06 11:23:07.202550] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:09.637 [2024-10-06 11:23:07.202556] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.637 [2024-10-06 11:23:07.202562] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.637 [2024-10-06 11:23:07.202567] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
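Editor's note: before the nvmf_tgt above is launched inside cvl_0_0_ns_spdk, the trace (nvmf/common.sh@250-291) isolates the target-side port in a network namespace and verifies 10.0.0.0/24 connectivity in both directions. The commands below are a condensed sketch of that setup using the interface names and addresses shown in the trace; the function name is illustrative, and everything here requires root.

setup_tcp_topology_sketch() {
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic to the default discovery/IO port used by the tests.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # host -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> host
}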
00:28:09.637 [2024-10-06 11:23:07.203942] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.637 [2024-10-06 11:23:07.204033] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:28:09.637 [2024-10-06 11:23:07.204149] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.637 [2024-10-06 11:23:07.204149] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:09.894 [2024-10-06 11:23:07.337593] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.894 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:09.894 Malloc1 00:28:09.894 [2024-10-06 11:23:07.432879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.894 Malloc2 00:28:10.151 Malloc3 00:28:10.151 Malloc4 00:28:10.151 Malloc5 00:28:10.151 Malloc6 00:28:10.151 Malloc7 00:28:10.151 Malloc8 00:28:10.409 Malloc9 00:28:10.409 Malloc10 00:28:10.409 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.409 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:10.409 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2164012 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2164012 /var/tmp/bdevperf.sock 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2164012 ']' 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:10.410 11:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:10.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:10.410 { 00:28:10.410 "params": { 00:28:10.410 "name": "Nvme$subsystem", 00:28:10.410 "trtype": "$TEST_TRANSPORT", 00:28:10.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.410 "adrfam": "ipv4", 00:28:10.410 "trsvcid": "$NVMF_PORT", 00:28:10.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.410 "hdgst": ${hdgst:-false}, 00:28:10.410 "ddgst": ${ddgst:-false} 00:28:10.410 }, 00:28:10.410 "method": "bdev_nvme_attach_controller" 00:28:10.410 } 00:28:10.410 EOF 00:28:10.410 )") 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:10.410 { 00:28:10.410 "params": { 00:28:10.410 "name": "Nvme$subsystem", 00:28:10.410 "trtype": "$TEST_TRANSPORT", 00:28:10.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.410 "adrfam": "ipv4", 00:28:10.410 "trsvcid": "$NVMF_PORT", 00:28:10.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.410 "hdgst": ${hdgst:-false}, 00:28:10.410 "ddgst": ${ddgst:-false} 00:28:10.410 }, 00:28:10.410 "method": "bdev_nvme_attach_controller" 00:28:10.410 } 00:28:10.410 EOF 00:28:10.410 )") 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:10.410 { 00:28:10.410 "params": { 00:28:10.410 
"name": "Nvme$subsystem", 00:28:10.410 "trtype": "$TEST_TRANSPORT", 00:28:10.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.410 "adrfam": "ipv4", 00:28:10.410 "trsvcid": "$NVMF_PORT", 00:28:10.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.410 "hdgst": ${hdgst:-false}, 00:28:10.410 "ddgst": ${ddgst:-false} 00:28:10.410 }, 00:28:10.410 "method": "bdev_nvme_attach_controller" 00:28:10.410 } 00:28:10.410 EOF 00:28:10.410 )") 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:10.410 { 00:28:10.410 "params": { 00:28:10.410 "name": "Nvme$subsystem", 00:28:10.410 "trtype": "$TEST_TRANSPORT", 00:28:10.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.410 "adrfam": "ipv4", 00:28:10.410 "trsvcid": "$NVMF_PORT", 00:28:10.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.410 "hdgst": ${hdgst:-false}, 00:28:10.410 "ddgst": ${ddgst:-false} 00:28:10.410 }, 00:28:10.410 "method": "bdev_nvme_attach_controller" 00:28:10.410 } 00:28:10.410 EOF 00:28:10.410 )") 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:10.410 { 00:28:10.410 "params": { 00:28:10.410 "name": "Nvme$subsystem", 00:28:10.410 "trtype": "$TEST_TRANSPORT", 00:28:10.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.410 "adrfam": "ipv4", 00:28:10.410 "trsvcid": "$NVMF_PORT", 00:28:10.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.410 "hdgst": ${hdgst:-false}, 00:28:10.410 "ddgst": ${ddgst:-false} 00:28:10.410 }, 00:28:10.410 "method": "bdev_nvme_attach_controller" 00:28:10.410 } 00:28:10.410 EOF 00:28:10.410 )") 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:10.410 { 00:28:10.410 "params": { 00:28:10.410 "name": "Nvme$subsystem", 00:28:10.410 "trtype": "$TEST_TRANSPORT", 00:28:10.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.410 "adrfam": "ipv4", 00:28:10.410 "trsvcid": "$NVMF_PORT", 00:28:10.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.410 "hdgst": ${hdgst:-false}, 00:28:10.410 "ddgst": ${ddgst:-false} 00:28:10.410 }, 00:28:10.410 "method": "bdev_nvme_attach_controller" 00:28:10.410 } 00:28:10.410 EOF 00:28:10.410 )") 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:10.410 { 00:28:10.410 "params": { 00:28:10.410 "name": "Nvme$subsystem", 00:28:10.410 "trtype": "$TEST_TRANSPORT", 00:28:10.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.410 "adrfam": "ipv4", 00:28:10.410 "trsvcid": "$NVMF_PORT", 00:28:10.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.410 "hdgst": ${hdgst:-false}, 00:28:10.410 "ddgst": ${ddgst:-false} 00:28:10.410 }, 00:28:10.410 "method": "bdev_nvme_attach_controller" 00:28:10.410 } 00:28:10.410 EOF 00:28:10.410 )") 00:28:10.410 [2024-10-06 11:23:07.905320] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:28:10.410 [2024-10-06 11:23:07.905376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2164012 ] 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:10.410 { 00:28:10.410 "params": { 00:28:10.410 "name": "Nvme$subsystem", 00:28:10.410 "trtype": "$TEST_TRANSPORT", 00:28:10.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.410 "adrfam": "ipv4", 00:28:10.410 "trsvcid": "$NVMF_PORT", 00:28:10.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.410 "hdgst": ${hdgst:-false}, 00:28:10.410 "ddgst": ${ddgst:-false} 00:28:10.410 }, 00:28:10.410 "method": "bdev_nvme_attach_controller" 00:28:10.410 } 00:28:10.410 EOF 00:28:10.410 )") 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:10.410 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:10.410 { 00:28:10.410 "params": { 00:28:10.410 "name": "Nvme$subsystem", 00:28:10.410 "trtype": "$TEST_TRANSPORT", 00:28:10.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.410 "adrfam": "ipv4", 00:28:10.410 "trsvcid": "$NVMF_PORT", 00:28:10.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.411 "hdgst": ${hdgst:-false}, 00:28:10.411 "ddgst": ${ddgst:-false} 00:28:10.411 }, 00:28:10.411 "method": "bdev_nvme_attach_controller" 00:28:10.411 } 00:28:10.411 EOF 00:28:10.411 )") 00:28:10.411 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:10.411 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:10.411 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:10.411 { 00:28:10.411 "params": { 00:28:10.411 "name": "Nvme$subsystem", 00:28:10.411 "trtype": "$TEST_TRANSPORT", 00:28:10.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.411 
"adrfam": "ipv4", 00:28:10.411 "trsvcid": "$NVMF_PORT", 00:28:10.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.411 "hdgst": ${hdgst:-false}, 00:28:10.411 "ddgst": ${ddgst:-false} 00:28:10.411 }, 00:28:10.411 "method": "bdev_nvme_attach_controller" 00:28:10.411 } 00:28:10.411 EOF 00:28:10.411 )") 00:28:10.411 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:10.411 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:28:10.411 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:28:10.411 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:10.411 "params": { 00:28:10.411 "name": "Nvme1", 00:28:10.411 "trtype": "tcp", 00:28:10.411 "traddr": "10.0.0.2", 00:28:10.411 "adrfam": "ipv4", 00:28:10.411 "trsvcid": "4420", 00:28:10.411 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:10.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:10.411 "hdgst": false, 00:28:10.411 "ddgst": false 00:28:10.411 }, 00:28:10.411 "method": "bdev_nvme_attach_controller" 00:28:10.411 },{ 00:28:10.411 "params": { 00:28:10.411 "name": "Nvme2", 00:28:10.411 "trtype": "tcp", 00:28:10.411 "traddr": "10.0.0.2", 00:28:10.411 "adrfam": "ipv4", 00:28:10.411 "trsvcid": "4420", 00:28:10.411 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:10.411 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:10.411 "hdgst": false, 00:28:10.411 "ddgst": false 00:28:10.411 }, 00:28:10.411 "method": "bdev_nvme_attach_controller" 00:28:10.411 },{ 00:28:10.411 "params": { 00:28:10.411 "name": "Nvme3", 00:28:10.411 "trtype": "tcp", 00:28:10.411 "traddr": "10.0.0.2", 00:28:10.411 "adrfam": "ipv4", 00:28:10.411 "trsvcid": "4420", 00:28:10.411 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:10.411 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:10.411 "hdgst": false, 00:28:10.411 "ddgst": false 00:28:10.411 }, 00:28:10.411 "method": "bdev_nvme_attach_controller" 00:28:10.411 },{ 00:28:10.411 "params": { 00:28:10.411 "name": "Nvme4", 00:28:10.411 "trtype": "tcp", 00:28:10.411 "traddr": "10.0.0.2", 00:28:10.411 "adrfam": "ipv4", 00:28:10.411 "trsvcid": "4420", 00:28:10.411 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:10.411 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:10.411 "hdgst": false, 00:28:10.411 "ddgst": false 00:28:10.411 }, 00:28:10.411 "method": "bdev_nvme_attach_controller" 00:28:10.411 },{ 00:28:10.411 "params": { 00:28:10.411 "name": "Nvme5", 00:28:10.411 "trtype": "tcp", 00:28:10.411 "traddr": "10.0.0.2", 00:28:10.411 "adrfam": "ipv4", 00:28:10.411 "trsvcid": "4420", 00:28:10.411 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:10.411 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:10.411 "hdgst": false, 00:28:10.411 "ddgst": false 00:28:10.411 }, 00:28:10.411 "method": "bdev_nvme_attach_controller" 00:28:10.411 },{ 00:28:10.411 "params": { 00:28:10.411 "name": "Nvme6", 00:28:10.411 "trtype": "tcp", 00:28:10.411 "traddr": "10.0.0.2", 00:28:10.411 "adrfam": "ipv4", 00:28:10.411 "trsvcid": "4420", 00:28:10.411 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:10.411 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:10.411 "hdgst": false, 00:28:10.411 "ddgst": false 00:28:10.411 }, 00:28:10.411 "method": "bdev_nvme_attach_controller" 00:28:10.411 },{ 00:28:10.411 "params": { 00:28:10.411 "name": "Nvme7", 00:28:10.411 "trtype": "tcp", 00:28:10.411 "traddr": "10.0.0.2", 
00:28:10.411 "adrfam": "ipv4", 00:28:10.411 "trsvcid": "4420", 00:28:10.411 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:10.411 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:10.411 "hdgst": false, 00:28:10.411 "ddgst": false 00:28:10.411 }, 00:28:10.411 "method": "bdev_nvme_attach_controller" 00:28:10.411 },{ 00:28:10.411 "params": { 00:28:10.411 "name": "Nvme8", 00:28:10.411 "trtype": "tcp", 00:28:10.411 "traddr": "10.0.0.2", 00:28:10.411 "adrfam": "ipv4", 00:28:10.411 "trsvcid": "4420", 00:28:10.411 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:10.411 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:10.411 "hdgst": false, 00:28:10.411 "ddgst": false 00:28:10.411 }, 00:28:10.411 "method": "bdev_nvme_attach_controller" 00:28:10.411 },{ 00:28:10.411 "params": { 00:28:10.411 "name": "Nvme9", 00:28:10.411 "trtype": "tcp", 00:28:10.411 "traddr": "10.0.0.2", 00:28:10.411 "adrfam": "ipv4", 00:28:10.411 "trsvcid": "4420", 00:28:10.411 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:10.411 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:10.411 "hdgst": false, 00:28:10.411 "ddgst": false 00:28:10.411 }, 00:28:10.411 "method": "bdev_nvme_attach_controller" 00:28:10.411 },{ 00:28:10.411 "params": { 00:28:10.411 "name": "Nvme10", 00:28:10.411 "trtype": "tcp", 00:28:10.411 "traddr": "10.0.0.2", 00:28:10.411 "adrfam": "ipv4", 00:28:10.411 "trsvcid": "4420", 00:28:10.411 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:10.411 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:10.411 "hdgst": false, 00:28:10.411 "ddgst": false 00:28:10.411 }, 00:28:10.411 "method": "bdev_nvme_attach_controller" 00:28:10.411 }' 00:28:10.411 [2024-10-06 11:23:07.965708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.668 [2024-10-06 11:23:08.006257] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.038 Running I/O for 10 seconds... 
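For orientation, the xtrace above shows nvmf/common.sh's gen_nvmf_target_json assembling one bdev_nvme_attach_controller "params" block per subsystem and comma-joining them for the config bdevperf reads via --json /dev/fd/63. Below is a condensed, self-contained sketch of that pattern, reconstructed from the trace only; the real helper splices these blocks into a larger bdev subsystem config and differs in detail, and the default values used here are taken from the resolved config printed above.

#!/usr/bin/env bash
# Illustrative sketch (not the literal nvmf/common.sh helper): build one
# bdev_nvme_attach_controller block per subsystem, comma-join the blocks,
# and validate/pretty-print the result with jq.
gen_nvmf_target_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(
            cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the per-subsystem blocks (IFS=, + "${config[*]}"), as in the
    # trace; wrapped in [] here only so this standalone sketch emits valid JSON.
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .
}

gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10

Each of the ten resolved blocks printed in the log above (Nvme1 through Nvme10, traddr 10.0.0.2, trsvcid 4420) is an instance of this template.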
00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.295 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:12.296 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:12.296 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:12.552 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:12.552 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:12.552 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:12.552 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.552 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:12.552 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.824 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.824 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=135 00:28:12.824 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:28:12.824 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:12.824 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:12.824 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:12.824 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2163885 00:28:12.824 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2163885 ']' 00:28:12.824 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2163885 00:28:12.824 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:28:12.824 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:12.824 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2163885 00:28:12.824 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:12.824 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:12.824 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2163885' 00:28:12.824 killing process with pid 2163885 00:28:12.824 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2163885 00:28:12.824 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2163885 00:28:12.824 [2024-10-06 11:23:10.213141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7420 is same with the state(6) to be set 00:28:12.824 [2024-10-06 11:23:10.213189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7420 is same with the state(6) to be set 00:28:12.824 [2024-10-06 11:23:10.213197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7420 is same with the state(6) to be set 00:28:12.824 [2024-10-06 11:23:10.213204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7420 is same with the state(6) to be set 00:28:12.824 [2024-10-06 11:23:10.213210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7420 is same with the state(6) to be set 00:28:12.824 [2024-10-06 11:23:10.213217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xbb7420 is same with the state(6) to be set 00:28:12.825 [2024-10-06 11:23:10.214628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9fd0 is same with the state(6) to be set 00:28:12.826 [2024-10-06 11:23:10.215946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb78f0 is same with the state(6) to be set 00:28:12.827 [2024-10-06 11:23:10.218475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb82b0 is same with the state(6) to be set 00:28:12.827 [2024-10-06 11:23:10.219485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219799] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.219907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8780 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 
00:28:12.828 [2024-10-06 11:23:10.220720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is 
same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.828 [2024-10-06 11:23:10.220886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.220892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.220898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.220904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.220910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.220916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.220925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.220931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.220937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.220942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.220948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.220955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.220962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.220967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.220973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.220988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.220994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.221001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.221007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.221015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.221021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.221026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.221032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.221039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.221045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.221051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.221056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.221067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.221074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.221080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.221086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.221092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.221098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.221108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c70 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222103] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 
00:28:12.829 [2024-10-06 11:23:10.222252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.829 [2024-10-06 11:23:10.222318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is 
same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.222480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9140 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.223240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9630 is same with the state(6) to be set 00:28:12.830 [2024-10-06 11:23:10.227277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:12.830 [2024-10-06 11:23:10.227367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 
[2024-10-06 11:23:10.227520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 
11:23:10.227667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.830 [2024-10-06 11:23:10.227681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.830 [2024-10-06 11:23:10.227689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 
11:23:10.227816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 
11:23:10.227964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.227987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.227996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.228004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.228010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.228019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.228025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.228033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.228042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.228051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.228062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.228071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.228078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.228086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.228093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.228102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.228109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.228118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 
11:23:10.228124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.228132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.228139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.228149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.228156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.228165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.228171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.228180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.228186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.228197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.228203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.228212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.228218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.228227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.228234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.228243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.228250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.228258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.831 [2024-10-06 11:23:10.228265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.831 [2024-10-06 11:23:10.228273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.832 [2024-10-06 
11:23:10.228280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:12.832 [2024-10-06 11:23:10.228362] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1402090 was disconnected and freed. reset controller. 00:28:12.832 [2024-10-06 11:23:10.228540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120a850 is same with the state(6) to be set 00:28:12.832 [2024-10-06 11:23:10.228628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x121ad70 is same with the state(6) to be set 00:28:12.832 [2024-10-06 11:23:10.228714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e2280 is same with the state(6) to be set 00:28:12.832 [2024-10-06 11:23:10.228795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e2d50 is same with the state(6) to be set 00:28:12.832 [2024-10-06 11:23:10.228875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 
11:23:10.228894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11de090 is same with the state(6) to be set 00:28:12.832 [2024-10-06 11:23:10.228963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.228987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.228995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.229002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.229010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.229018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.229025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda5d10 is same with the state(6) to be set 00:28:12.832 [2024-10-06 11:23:10.229048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.229064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.229072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.229079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.229087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.229096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.229103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.229110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.229121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9f460 is same with the state(6) to be set 00:28:12.832 [2024-10-06 11:23:10.229143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.229153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.229162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.229169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.229176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.229182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.229190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.832 [2024-10-06 11:23:10.229199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.832 [2024-10-06 11:23:10.229205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdad710 is same with the state(6) to be set 00:28:12.832 [2024-10-06 11:23:10.229226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.833 [2024-10-06 11:23:10.229235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.833 [2024-10-06 11:23:10.229249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.833 [2024-10-06 11:23:10.229265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.833 [2024-10-06 
11:23:10.229286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda7230 is same with the state(6) to be set 00:28:12.833 [2024-10-06 11:23:10.229319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.833 [2024-10-06 11:23:10.229327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.833 [2024-10-06 11:23:10.229352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.833 [2024-10-06 11:23:10.229365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.833 [2024-10-06 11:23:10.229380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda7f00 is same with the state(6) to be set 00:28:12.833 [2024-10-06 11:23:10.229748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.229769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.229788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.229804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.229819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.229833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.229848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.229863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.229878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.229892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.229911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.229926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.229943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.229959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.229976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.229991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.229999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.230007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.230015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.230022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.230030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.230038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.230045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.230052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.230068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.230075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.230084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.230090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.230099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.230105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.230113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.230120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.230129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.230135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.230156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.230163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.230171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.833 [2024-10-06 11:23:10.230180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.833 [2024-10-06 11:23:10.230189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.230195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.230204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.230211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.230219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.230225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.230234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.240986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.240996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.241004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.241013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.241022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.241032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.241039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.241049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.241057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.241072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.241080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.241089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.241097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.241106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.241117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.834 [2024-10-06 11:23:10.241127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.834 [2024-10-06 11:23:10.241134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:12.835 [2024-10-06 11:23:10.241258] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: 
*NOTICE*: qpair 0x1421e50 was disconnected and freed. reset controller. 00:28:12.835 [2024-10-06 11:23:10.241546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.241986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.241995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.242004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.242013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.242023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.242030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.242040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.242047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.242065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.242074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.242084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.242092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.242101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.242109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.242119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.242128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.242138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.242146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.242156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.242164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.242173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.242182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.242193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.242202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.242210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.835 [2024-10-06 11:23:10.242219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.835 [2024-10-06 11:23:10.242229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:12.836 [2024-10-06 11:23:10.242480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 
[2024-10-06 11:23:10.242658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.242718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.242788] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11b8d40 was disconnected and freed. reset controller. 00:28:12.836 [2024-10-06 11:23:10.244044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x120a850 (9): Bad file descriptor 00:28:12.836 [2024-10-06 11:23:10.244085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121ad70 (9): Bad file descriptor 00:28:12.836 [2024-10-06 11:23:10.244099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e2280 (9): Bad file descriptor 00:28:12.836 [2024-10-06 11:23:10.244118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e2d50 (9): Bad file descriptor 00:28:12.836 [2024-10-06 11:23:10.244137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11de090 (9): Bad file descriptor 00:28:12.836 [2024-10-06 11:23:10.244151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda5d10 (9): Bad file descriptor 00:28:12.836 [2024-10-06 11:23:10.244166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9f460 (9): Bad file descriptor 00:28:12.836 [2024-10-06 11:23:10.244183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdad710 (9): Bad file descriptor 00:28:12.836 [2024-10-06 11:23:10.244199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda7230 (9): Bad file descriptor 00:28:12.836 [2024-10-06 11:23:10.244217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda7f00 (9): Bad file descriptor 00:28:12.836 [2024-10-06 11:23:10.246795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.246825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 
11:23:10.246845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.246855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.246870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.246887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.246901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.246912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.246924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.836 [2024-10-06 11:23:10.246935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.836 [2024-10-06 11:23:10.246947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.246958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.246970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.246981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.246993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.837 [2024-10-06 11:23:10.247842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.837 [2024-10-06 11:23:10.247853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.247865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.247875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.247886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.247897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.247909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.247919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.247932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.247942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.247955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.247966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.247979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.247989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.248002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.248013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.248026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.248037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.248049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.248067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.248080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.248090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.248102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.248113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.248125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.248135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.248147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.248158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.248170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.248181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.248193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.248203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.248215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.248227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.248239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.248250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.248262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.248272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.248284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.248295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.248307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.838 [2024-10-06 11:23:10.248317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.838 [2024-10-06 11:23:10.248415] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14035a0 was disconnected and freed. reset controller. 00:28:12.838 [2024-10-06 11:23:10.248446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:12.838 [2024-10-06 11:23:10.248465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:12.838 [2024-10-06 11:23:10.250586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:12.838 [2024-10-06 11:23:10.250948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.838 [2024-10-06 11:23:10.250972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120a850 with addr=10.0.0.2, port=4420 00:28:12.838 [2024-10-06 11:23:10.250985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120a850 is same with the state(6) to be set 00:28:12.838 [2024-10-06 11:23:10.251256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.838 [2024-10-06 11:23:10.251274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda5d10 with addr=10.0.0.2, port=4420 00:28:12.838 [2024-10-06 11:23:10.251285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda5d10 is same with the state(6) to be set 00:28:12.838 [2024-10-06 11:23:10.252245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:12.838 [2024-10-06 11:23:10.252467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.838 [2024-10-06 11:23:10.252489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11de090 with addr=10.0.0.2, port=4420 00:28:12.838 [2024-10-06 11:23:10.252502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11de090 is same with the state(6) to be set 00:28:12.838 [2024-10-06 11:23:10.252517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x120a850 (9): Bad file descriptor 00:28:12.838 [2024-10-06 11:23:10.252533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda5d10 (9): Bad file descriptor 00:28:12.838 [2024-10-06 11:23:10.252588] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:12.838 [2024-10-06 
11:23:10.252646] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:12.838 [2024-10-06 11:23:10.252711] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:12.838 [2024-10-06 11:23:10.252769] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:12.838 [2024-10-06 11:23:10.252823] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:12.838 [2024-10-06 11:23:10.252882] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:12.838 [2024-10-06 11:23:10.253442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.838 [2024-10-06 11:23:10.253464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121ad70 with addr=10.0.0.2, port=4420 00:28:12.838 [2024-10-06 11:23:10.253478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121ad70 is same with the state(6) to be set 00:28:12.838 [2024-10-06 11:23:10.253493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11de090 (9): Bad file descriptor 00:28:12.838 [2024-10-06 11:23:10.253505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:12.838 [2024-10-06 11:23:10.253516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:12.838 [2024-10-06 11:23:10.253529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:12.838 [2024-10-06 11:23:10.253546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:12.838 [2024-10-06 11:23:10.253557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:12.839 [2024-10-06 11:23:10.253568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:12.839 [2024-10-06 11:23:10.253695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.839 [2024-10-06 11:23:10.253711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.839 [2024-10-06 11:23:10.253728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121ad70 (9): Bad file descriptor 00:28:12.839 [2024-10-06 11:23:10.253741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:12.839 [2024-10-06 11:23:10.253750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:12.839 [2024-10-06 11:23:10.253762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:12.839 [2024-10-06 11:23:10.253821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.839 [2024-10-06 11:23:10.253834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:12.839 [2024-10-06 11:23:10.253844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:12.839 [2024-10-06 11:23:10.253853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:28:12.839 [2024-10-06 11:23:10.253900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.839 [2024-10-06 11:23:10.254190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.839 [2024-10-06 11:23:10.254985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.839 [2024-10-06 11:23:10.254997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255635] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.255727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.255738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb00a0 is same with the state(6) to be set 00:28:12.840 [2024-10-06 11:23:10.257075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.257089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.257102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.257111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.257122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.257130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.257140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.257148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.257159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.257166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.257177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.257185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.257196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.840 [2024-10-06 11:23:10.257205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.840 [2024-10-06 11:23:10.257218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.841 [2024-10-06 11:23:10.257895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.841 [2024-10-06 11:23:10.257905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:12.842 [2024-10-06 11:23:10.257913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.257923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.257930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.257940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.257949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.257959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.257968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.257977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.257986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.257995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.258003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.258012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.258021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.258030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.258038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.258047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.258055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.258069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.258078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.258087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 
11:23:10.258094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.258103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.258112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.258122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.258132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.258142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.258150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.258159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.258167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.258176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.258184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.258193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.258200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.258210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.258217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.258227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.258234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.258242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb12c0 is same with the state(6) to be set 00:28:12.842 [2024-10-06 11:23:10.259383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.842 [2024-10-06 11:23:10.259740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.842 [2024-10-06 11:23:10.259749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.259758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.259768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.259775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.259787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.259794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.259804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.259812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.259823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.259830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.259840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.259849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.259859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.259869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.259879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.259888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.259897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.259905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.259915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.259923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.259932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.259940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.259949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.259959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.259969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.259977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.259988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.259997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260337] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.843 [2024-10-06 11:23:10.260359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.843 [2024-10-06 11:23:10.260368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.260378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.260386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.260397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.260405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.260414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.260424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.260433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.260442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.260451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.260460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.260469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.260477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.260487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.260494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.260505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.260513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.260523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.260531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.260541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.260549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.260559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.260567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.260575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142f220 is same with the state(6) to be set 00:28:12.844 [2024-10-06 11:23:10.261706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.261722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.261735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.261743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.261753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.261761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.261772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.261784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.261795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.261804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.261814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.261822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.261831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.261840] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.261850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.261859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.261868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.261877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.261887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.261895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.261905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.261914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.261923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.261932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.261942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.261950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.261960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.261968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.261979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.261987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.261997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.262005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.262018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.262027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.262037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.262046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.262055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.262071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.262080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.262089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.262099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.262108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.262117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.262125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.262135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.262143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.262153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.262161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.262172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.262179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.262190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.262197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.844 [2024-10-06 11:23:10.262207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.844 [2024-10-06 11:23:10.262217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:12.845 [2024-10-06 11:23:10.262598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 
11:23:10.262775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.845 [2024-10-06 11:23:10.262891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.845 [2024-10-06 11:23:10.262900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b4db0 is same with the state(6) to be set 00:28:12.846 [2024-10-06 11:23:10.264043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264112] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.846 [2024-10-06 11:23:10.264754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.846 [2024-10-06 11:23:10.264764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.264771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.264782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.264792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.264803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.264812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.264822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.264830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.264840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.264848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.264857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.264866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.264875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.264883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.264892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.264900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.264909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.264918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.264926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.264934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.264944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.264952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.264960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.264969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.264980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.264988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.264997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.265005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.265016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.265024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.265034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.265041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.265050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.265062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.265072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.265080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.265089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.265096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.265107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.265115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.265126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.265133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.265143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.265151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.265160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.265167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.265176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.265184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.265192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.265200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.265209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.265218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.265226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b62e0 is same with the state(6) to be set 00:28:12.847 [2024-10-06 11:23:10.266370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.266388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.266400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.266408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.266422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.266429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.266440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.266449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.266461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.266470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.266479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.266488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.266498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.266506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.847 [2024-10-06 11:23:10.266515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.847 [2024-10-06 11:23:10.266524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.266990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.266997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.267006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.267013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.267022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.267029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.267037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.267046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.267054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.267066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.267074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.267081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.267090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.267097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.267105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.267112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.267120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.267127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.267135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.267142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.267151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.267158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.267167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.848 [2024-10-06 11:23:10.267173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.848 [2024-10-06 11:23:10.267182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-10-06 11:23:10.267189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.849 [2024-10-06 11:23:10.267198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-10-06 11:23:10.267206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.849 [2024-10-06 11:23:10.267214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:12.849 [2024-10-06 11:23:10.267221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.849 [2024-10-06 11:23:10.267230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-10-06 11:23:10.267239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.849 [2024-10-06 11:23:10.267250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-10-06 11:23:10.267257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.849 [2024-10-06 11:23:10.267267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-10-06 11:23:10.267275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.849 [2024-10-06 11:23:10.267284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-10-06 11:23:10.267291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.849 [2024-10-06 11:23:10.267299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-10-06 11:23:10.267306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.849 [2024-10-06 11:23:10.267315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-10-06 11:23:10.267322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.849 [2024-10-06 11:23:10.267330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-10-06 11:23:10.267338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.849 [2024-10-06 11:23:10.267346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-10-06 11:23:10.267353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.849 [2024-10-06 11:23:10.267361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-10-06 11:23:10.267369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.849 [2024-10-06 11:23:10.267377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-10-06 
11:23:10.267385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.849 [2024-10-06 11:23:10.267393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-10-06 11:23:10.267401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.849 [2024-10-06 11:23:10.267409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-10-06 11:23:10.267417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.849 [2024-10-06 11:23:10.267425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-10-06 11:23:10.267431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.849 [2024-10-06 11:23:10.267441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-10-06 11:23:10.267450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.849 [2024-10-06 11:23:10.267458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.849 [2024-10-06 11:23:10.267465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.849 [2024-10-06 11:23:10.267472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b7810 is same with the state(6) to be set 00:28:12.849 [2024-10-06 11:23:10.268486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:12.849 [2024-10-06 11:23:10.268507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:12.849 [2024-10-06 11:23:10.268517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:12.849 [2024-10-06 11:23:10.268526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:12.849 [2024-10-06 11:23:10.268602] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.849 [2024-10-06 11:23:10.268620] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:12.849 [2024-10-06 11:23:10.268681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 
00:28:12.849 task offset: 24576 on job bdev=Nvme9n1 fails 
00:28:12.849 
00:28:12.849 Latency(us) 
00:28:12.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:28:12.849 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:12.849 Job: Nvme1n1 ended in about 0.97 seconds with error 
00:28:12.849 Verification LBA range: start 0x0 length 0x400 
00:28:12.849 Nvme1n1 : 0.97 198.58 12.41 66.19 0.00 239525.79 18350.08 214708.42 
00:28:12.849 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:12.849 Job: Nvme2n1 ended in about 0.97 seconds with error 
00:28:12.849 Verification LBA range: start 0x0 length 0x400 
00:28:12.849 Nvme2n1 : 0.97 198.09 12.38 66.03 0.00 236290.44 16103.13 219701.64 
00:28:12.849 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:12.849 Job: Nvme3n1 ended in about 0.96 seconds with error 
00:28:12.849 Verification LBA range: start 0x0 length 0x400 
00:28:12.849 Nvme3n1 : 0.96 267.96 16.75 66.99 0.00 183103.49 12607.88 214708.42 
00:28:12.849 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:12.849 Job: Nvme4n1 ended in about 0.97 seconds with error 
00:28:12.849 Verification LBA range: start 0x0 length 0x400 
00:28:12.849 Nvme4n1 : 0.97 203.79 12.74 65.87 0.00 223826.76 13606.52 193736.90 
00:28:12.849 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:12.849 Job: Nvme5n1 ended in about 0.97 seconds with error 
00:28:12.849 Verification LBA range: start 0x0 length 0x400 
00:28:12.849 Nvme5n1 : 0.97 197.15 12.32 65.72 0.00 225878.06 16976.94 215707.06 
00:28:12.849 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:12.849 Job: Nvme6n1 ended in about 0.98 seconds with error 
00:28:12.849 Verification LBA range: start 0x0 length 0x400 
00:28:12.849 Nvme6n1 : 0.98 196.68 12.29 65.56 0.00 222599.07 29584.82 220700.28 
00:28:12.849 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:12.849 Job: Nvme7n1 ended in about 0.98 seconds with error 
00:28:12.849 Verification LBA range: start 0x0 length 0x400 
00:28:12.849 Nvme7n1 : 0.98 196.23 12.26 65.41 0.00 219285.94 17601.10 221698.93 
00:28:12.849 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:12.849 Job: Nvme8n1 ended in about 0.96 seconds with error 
00:28:12.849 Verification LBA range: start 0x0 length 0x400 
00:28:12.849 Nvme8n1 : 0.96 267.61 16.73 66.90 0.00 167898.45 15104.49 211712.49 
00:28:12.849 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:12.849 Job: Nvme9n1 ended in about 0.95 seconds with error 
00:28:12.849 Verification LBA range: start 0x0 length 0x400 
00:28:12.849 Nvme9n1 : 0.95 201.30 12.58 67.10 0.00 205323.70 21595.67 241671.80 
00:28:12.849 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:12.849 Job: Nvme10n1 ended in about 0.96 seconds with error 
00:28:12.849 Verification LBA range: start 0x0 length 0x400 
00:28:12.849 Nvme10n1 : 0.96 199.94 12.50 66.65 0.00 203163.55 5523.75 236678.58 
00:28:12.849 =================================================================================================================== 
00:28:12.849 Total : 2127.32 132.96 662.42 0.00 210947.39 5523.75 241671.80 
00:28:12.849 [2024-10-06 11:23:10.299986] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd 
on non-zero 00:28:12.849 [2024-10-06 11:23:10.300035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:12.849 [2024-10-06 11:23:10.300428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.849 [2024-10-06 11:23:10.300448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdad710 with addr=10.0.0.2, port=4420 00:28:12.849 [2024-10-06 11:23:10.300459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdad710 is same with the state(6) to be set 00:28:12.849 [2024-10-06 11:23:10.300648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.849 [2024-10-06 11:23:10.300661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9f460 with addr=10.0.0.2, port=4420 00:28:12.849 [2024-10-06 11:23:10.300669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9f460 is same with the state(6) to be set 00:28:12.849 [2024-10-06 11:23:10.300907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.849 [2024-10-06 11:23:10.300920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda7230 with addr=10.0.0.2, port=4420 00:28:12.849 [2024-10-06 11:23:10.300928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda7230 is same with the state(6) to be set 00:28:12.849 [2024-10-06 11:23:10.301142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.850 [2024-10-06 11:23:10.301155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e2d50 with addr=10.0.0.2, port=4420 00:28:12.850 [2024-10-06 11:23:10.301162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e2d50 is same with the state(6) to be set 00:28:12.850 [2024-10-06 11:23:10.302466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:12.850 [2024-10-06 11:23:10.302485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:12.850 [2024-10-06 11:23:10.302494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:12.850 [2024-10-06 11:23:10.302503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:12.850 [2024-10-06 11:23:10.302747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.850 [2024-10-06 11:23:10.302763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e2280 with addr=10.0.0.2, port=4420 00:28:12.850 [2024-10-06 11:23:10.302771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e2280 is same with the state(6) to be set 00:28:12.850 [2024-10-06 11:23:10.303006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.850 [2024-10-06 11:23:10.303019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda7f00 with addr=10.0.0.2, port=4420 00:28:12.850 [2024-10-06 11:23:10.303027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda7f00 is same with the state(6) to be set 00:28:12.850 [2024-10-06 11:23:10.303047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdad710 
(9): Bad file descriptor 00:28:12.850 [2024-10-06 11:23:10.303063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9f460 (9): Bad file descriptor 00:28:12.850 [2024-10-06 11:23:10.303073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda7230 (9): Bad file descriptor 00:28:12.850 [2024-10-06 11:23:10.303081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e2d50 (9): Bad file descriptor 00:28:12.850 [2024-10-06 11:23:10.303116] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.850 [2024-10-06 11:23:10.303127] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.850 [2024-10-06 11:23:10.303137] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.850 [2024-10-06 11:23:10.303146] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.850 [2024-10-06 11:23:10.303409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.850 [2024-10-06 11:23:10.303423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda5d10 with addr=10.0.0.2, port=4420 00:28:12.850 [2024-10-06 11:23:10.303431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda5d10 is same with the state(6) to be set 00:28:12.850 [2024-10-06 11:23:10.303672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.850 [2024-10-06 11:23:10.303684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120a850 with addr=10.0.0.2, port=4420 00:28:12.850 [2024-10-06 11:23:10.303691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120a850 is same with the state(6) to be set 00:28:12.850 [2024-10-06 11:23:10.303928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.850 [2024-10-06 11:23:10.303941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11de090 with addr=10.0.0.2, port=4420 00:28:12.850 [2024-10-06 11:23:10.303948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11de090 is same with the state(6) to be set 00:28:12.850 [2024-10-06 11:23:10.304223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.850 [2024-10-06 11:23:10.304236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121ad70 with addr=10.0.0.2, port=4420 00:28:12.850 [2024-10-06 11:23:10.304244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121ad70 is same with the state(6) to be set 00:28:12.850 [2024-10-06 11:23:10.304253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e2280 (9): Bad file descriptor 00:28:12.850 [2024-10-06 11:23:10.304263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda7f00 (9): Bad file descriptor 00:28:12.850 [2024-10-06 11:23:10.304272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:12.850 [2024-10-06 11:23:10.304278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:28:12.850 [2024-10-06 11:23:10.304286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:12.850 [2024-10-06 11:23:10.304298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:12.850 [2024-10-06 11:23:10.304305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:12.850 [2024-10-06 11:23:10.304313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:12.850 [2024-10-06 11:23:10.304322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:12.850 [2024-10-06 11:23:10.304328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:12.850 [2024-10-06 11:23:10.304338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:12.850 [2024-10-06 11:23:10.304347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:12.850 [2024-10-06 11:23:10.304353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:12.850 [2024-10-06 11:23:10.304361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:12.850 [2024-10-06 11:23:10.304429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.850 [2024-10-06 11:23:10.304439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.850 [2024-10-06 11:23:10.304444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.850 [2024-10-06 11:23:10.304450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.850 [2024-10-06 11:23:10.304457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda5d10 (9): Bad file descriptor 00:28:12.850 [2024-10-06 11:23:10.304467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x120a850 (9): Bad file descriptor 00:28:12.850 [2024-10-06 11:23:10.304475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11de090 (9): Bad file descriptor 00:28:12.850 [2024-10-06 11:23:10.304484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121ad70 (9): Bad file descriptor 00:28:12.850 [2024-10-06 11:23:10.304491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:12.850 [2024-10-06 11:23:10.304498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:12.850 [2024-10-06 11:23:10.304505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:12.850 [2024-10-06 11:23:10.304514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:12.850 [2024-10-06 11:23:10.304521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:12.850 [2024-10-06 11:23:10.304527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:28:12.850 [2024-10-06 11:23:10.304550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.850 [2024-10-06 11:23:10.304557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.850 [2024-10-06 11:23:10.304563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:12.850 [2024-10-06 11:23:10.304570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:12.850 [2024-10-06 11:23:10.304577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:12.850 [2024-10-06 11:23:10.304585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:12.850 [2024-10-06 11:23:10.304590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:12.850 [2024-10-06 11:23:10.304597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:12.850 [2024-10-06 11:23:10.304605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:12.850 [2024-10-06 11:23:10.304613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:12.850 [2024-10-06 11:23:10.304619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:12.850 [2024-10-06 11:23:10.304629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:12.850 [2024-10-06 11:23:10.304638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:12.850 [2024-10-06 11:23:10.304645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:12.850 [2024-10-06 11:23:10.304669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.850 [2024-10-06 11:23:10.304676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.850 [2024-10-06 11:23:10.304682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.850 [2024-10-06 11:23:10.304688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:13.109 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2164012 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2164012 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 2164012 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:14.484 rmmod nvme_tcp 00:28:14.484 
rmmod nvme_fabrics 00:28:14.484 rmmod nvme_keyring 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 2163885 ']' 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 2163885 00:28:14.484 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2163885 ']' 00:28:14.485 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2163885 00:28:14.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2163885) - No such process 00:28:14.485 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 2163885 is not found' 00:28:14.485 Process with pid 2163885 is not found 00:28:14.485 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:14.485 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:14.485 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:14.485 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:14.485 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:28:14.485 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:28:14.485 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:14.485 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:14.485 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:14.485 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.485 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.485 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:16.386 00:28:16.386 real 0m7.063s 00:28:16.386 user 0m15.996s 00:28:16.386 sys 0m1.331s 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:16.386 ************************************ 00:28:16.386 END TEST nvmf_shutdown_tc3 00:28:16.386 ************************************ 00:28:16.386 11:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:16.386 ************************************ 00:28:16.386 START TEST nvmf_shutdown_tc4 00:28:16.386 ************************************ 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:16.386 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:16.387 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:16.387 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.387 11:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:16.387 Found net devices under 0000:af:00.0: cvl_0_0 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:16.387 Found net devices under 0000:af:00.1: cvl_0_1 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:16.387 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:16.388 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:16.388 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:16.388 11:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:16.388 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.388 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:16.388 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:16.388 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:16.388 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:16.646 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:16.646 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:16.646 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:16.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:16.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:28:16.646 00:28:16.646 --- 10.0.0.2 ping statistics --- 00:28:16.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.646 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:16.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:16.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:28:16.646 00:28:16.646 --- 10.0.0.1 ping statistics --- 00:28:16.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.646 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=2165169 00:28:16.646 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 2165169 00:28:16.647 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 2165169 ']' 00:28:16.647 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.647 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:16.647 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:16.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
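The trace above brings the test network up purely from the commands echoed out of nvmf/common.sh: the target-side port (cvl_0_0) is moved into a private network namespace, both ends get addresses on 10.0.0.0/24, an iptables ACCEPT rule opens TCP port 4420, and connectivity is verified with a ping in each direction before the target is started. What follows is a minimal sketch of that setup, reconstructed only from the commands visible in the trace; the interface and namespace names are the ones this particular run happens to use, and the commands assume root privileges.

# Hedged sketch of the netns-based NVMe/TCP test network seen in the trace above.
NS=cvl_0_0_ns_spdk            # target-side namespace name from the trace
TGT_IF=cvl_0_0                # interface handed to the target namespace
INI_IF=cvl_0_1                # interface left in the default (initiator) namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port on the initiator-facing interface, tagged with
# the same comment convention the trace uses so the rule can be removed later.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check both directions before launching the target inside the namespace.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1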
00:28:16.647 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:16.647 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:16.647 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:16.647 [2024-10-06 11:23:14.177477] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:28:16.647 [2024-10-06 11:23:14.177521] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:16.905 [2024-10-06 11:23:14.235177] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:16.905 [2024-10-06 11:23:14.273489] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:16.905 [2024-10-06 11:23:14.273530] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:16.905 [2024-10-06 11:23:14.273537] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:16.905 [2024-10-06 11:23:14.273543] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:16.905 [2024-10-06 11:23:14.273548] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:16.905 [2024-10-06 11:23:14.275084] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:16.905 [2024-10-06 11:23:14.275175] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:28:16.905 [2024-10-06 11:23:14.275283] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.905 [2024-10-06 11:23:14.275283] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:16.905 [2024-10-06 11:23:14.418001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # 
rpc_cmd 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.905 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:17.162 Malloc1 00:28:17.162 [2024-10-06 11:23:14.517468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.162 Malloc2 00:28:17.162 Malloc3 00:28:17.162 Malloc4 00:28:17.162 Malloc5 00:28:17.162 Malloc6 00:28:17.421 Malloc7 00:28:17.421 Malloc8 00:28:17.421 Malloc9 00:28:17.421 Malloc10 00:28:17.421 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.421 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:17.421 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:17.421 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:17.421 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2165316 00:28:17.421 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:17.421 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:17.678 [2024-10-06 11:23:15.002704] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
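The rpc_cmd block above builds ten subsystems backed by Malloc bdevs and starts spdk_nvme_perf against the 10.0.0.2:4420 listener; the trace that follows then kills the target process while that workload is still running, which is what produces the cascade of qpair and reset errors below. The next lines are a minimal sketch of that kill-under-load sequence, using the perf flags copied from the command echoed above and a simplified version of the killprocess helper seen in the trace (the real helper also verifies the process name before killing); the PID value and repository path are the ones this run logged, and the sketch assumes it runs in the same shell that launched the target so that wait applies.

# Hedged sketch of the shutdown-under-load step performed by the trace below.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # repo root from the trace
nvmfpid=2165169                                          # target PID recorded earlier

# Generate 20 seconds of 128-deep random-write load against the TCP listener.
"$SPDK/build/bin/spdk_nvme_perf" -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
    -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
perfpid=$!

sleep 5                        # let the workload ramp up before the shutdown

# Kill the nvmf target while I/O is still in flight (simplified killprocess).
if kill -0 "$nvmfpid" 2>/dev/null; then
    kill "$nvmfpid"
    wait "$nvmfpid" 2>/dev/null
fi

# From this point the initiator side is expected to log qpair flush and
# controller reset failures; the perf process should still exit on its own.
wait "$perfpid" || true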
00:28:22.954 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:22.954 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2165169 00:28:22.954 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 2165169 ']' 00:28:22.954 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 2165169 00:28:22.954 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:28:22.954 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:22.954 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2165169 00:28:22.955 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:22.955 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:22.955 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2165169' 00:28:22.955 killing process with pid 2165169 00:28:22.955 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 2165169 00:28:22.955 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 2165169 00:28:22.955 [2024-10-06 11:23:20.017468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222b240 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.017522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222b240 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.017531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222b240 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.017539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222b240 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.017546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222b240 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.017552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222b240 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.017923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222b710 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.017953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222b710 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.017961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222b710 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.017968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222b710 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.017975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x222b710 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.017982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222b710 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.017989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222b710 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.017995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222b710 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.019012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222a880 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.019038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222a880 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.019046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222a880 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.019054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222a880 is same with the state(6) to be set 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 [2024-10-06 11:23:20.028602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ed30 is same with the state(6) to be set 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 [2024-10-06 11:23:20.028632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ed30 is same with Write completed with error (sct=0, sc=8) 00:28:22.955 the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.028644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ed30 is same with the state(6) to be set 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 [2024-10-06 11:23:20.028652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ed30 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.028659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ed30 is same with the state(6) to be set 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 [2024-10-06 11:23:20.028667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ed30 is same with the state(6) to be set 00:28:22.955 starting I/O failed: -6 00:28:22.955 [2024-10-06 11:23:20.028673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ed30 is same with the state(6) to be set 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed 
with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 [2024-10-06 11:23:20.029171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f200 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.029197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f200 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.029206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f200 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.029177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such devi[2024-10-06 11:23:20.029214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f200 is same with ce or address) on qpair id 1 00:28:22.955 the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.029225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f200 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.029232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f200 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.029238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f200 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.029245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f200 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.029251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f200 is same with the state(6) to be set 00:28:22.955 [2024-10-06 11:23:20.029258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f200 is same with the state(6) to be set 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 
starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 starting I/O failed: -6 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 Write completed with error (sct=0, sc=8) 00:28:22.955 [2024-10-06 11:23:20.029964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f6d0 is same with starting I/O failed: -6 00:28:22.955 the state(6) to be set 00:28:22.956 [2024-10-06 11:23:20.029987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f6d0 is same with the state(6) to be set 00:28:22.956 Write completed with error (sct=0, sc=8) 00:28:22.956 [2024-10-06 11:23:20.029995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f6d0 is same with the state(6) to be set 00:28:22.956 [2024-10-06 11:23:20.030002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f6d0 is same with the state(6) to be set 00:28:22.956 Write completed with error (sct=0, sc=8) 00:28:22.956 [2024-10-06 11:23:20.030009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f6d0 is same with starting I/O failed: -6 00:28:22.956 the state(6) to be set 00:28:22.956 [2024-10-06 11:23:20.030017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f6d0 is same with the state(6) to be set 00:28:22.956 Write completed with error (sct=0, sc=8) 00:28:22.956 [2024-10-06 11:23:20.030028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f6d0 is same with the state(6) to be set 
00:28:22.956 [2024-10-06 11:23:20.030035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f6d0 is same with the state(6) to be set
    (the tcp.c:1773 message above repeats several times for tqpair=0x223f6d0 through 11:23:20.030076, interleaved with the write failures below)
00:28:22.956 Write completed with error (sct=0, sc=8)
00:28:22.956 starting I/O failed: -6
00:28:22.956 [2024-10-06 11:23:20.030111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.956 Write completed with error (sct=0, sc=8)
00:28:22.956 starting I/O failed: -6
    (the write-failure pair above repeats for each write still outstanding on the qpair)
00:28:22.956 [2024-10-06 11:23:20.030475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e860 is same with the state(6) to be set
    (the tcp.c:1773 message above repeats several times for tqpair=0x223e860 through 11:23:20.030546, interleaved with further write failures)
00:28:22.956 [2024-10-06 11:23:20.031167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:22.956 Write completed with error (sct=0, sc=8)
00:28:22.956 starting I/O failed: -6
    (the write-failure pair above repeats for the remaining outstanding writes)
00:28:22.957 [2024-10-06 11:23:20.032709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:22.957 NVMe io qpair process completion error
00:28:22.957 [2024-10-06 11:23:20.033292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22413b0 is same with the state(6) to be set
    (the tcp.c:1773 message above repeats several times for tqpair=0x22413b0 through 11:23:20.033368)
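Each "Write completed with error (sct=0, sc=8)" line above is the initiator reporting an NVMe completion whose status code type is 0 (generic command status) and whose status code is 0x08, "Command Aborted due to SQ Deletion"; that is the status in-flight writes carry once the target tears the queue pair down. A minimal sketch of a completion callback that would print such a line, assuming only the public SPDK NVMe driver API (the function name and message text are illustrative, not taken from the test source):

/*
 * Illustrative sketch only (not the test's source): a completion callback in
 * the style suggested by the log lines above, using the public SPDK NVMe API.
 * sct/sc are the status code type and status code carried in the completion.
 */
#include "spdk/nvme.h"
#include <stdio.h>

static void
write_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
        (void)cb_arg;
        if (spdk_nvme_cpl_is_error(cpl)) {
                /* sct=0, sc=8 means generic status / command aborted due to SQ deletion. */
                printf("Write completed with error (sct=%d, sc=%d)\n",
                       cpl->status.sct, cpl->status.sc);
                return;
        }
        /* Success path: nothing to log in this sketch. */
}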
00:28:22.957 Write completed with error (sct=0, sc=8)
00:28:22.957 starting I/O failed: -6
    (the write-failure pair above repeats while the next set of qpairs fails)
00:28:22.957 [2024-10-06 11:23:20.033741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213290 is same with the state(6) to be set
    (message repeated for tqpair=0x2213290 through 11:23:20.033800)
00:28:22.957 [2024-10-06 11:23:20.033959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.957 [2024-10-06 11:23:20.034081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213760 is same with the state(6) to be set
    (message repeated for tqpair=0x2213760 through 11:23:20.034137)
00:28:22.957 [2024-10-06 11:23:20.034412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240ee0 is same with the state(6) to be set
    (message repeated for tqpair=0x2240ee0 through 11:23:20.034457)
00:28:22.957 [2024-10-06 11:23:20.034861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:22.958 Write completed with error (sct=0, sc=8)
00:28:22.958 starting I/O failed: -6
    (the write-failure pair above repeats for the writes still queued on the failing qpairs)
00:28:22.958 [2024-10-06 11:23:20.035896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:22.958 [2024-10-06 11:23:20.036242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223fba0 is same with the state(6) to be set
    (message repeated for tqpair=0x223fba0 through 11:23:20.036287)
00:28:22.959 Write completed with error (sct=0, sc=8)
00:28:22.959 starting I/O failed: -6
    (the write-failure pair above repeats until the last qpair of this group is failed)
00:28:22.959 [2024-10-06 11:23:20.038075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.959 NVMe io qpair process completion error
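The "CQ transport error -6 (No such device or address) on qpair id N" lines are emitted from nvme_qpair.c when spdk_nvme_qpair_process_completions() can no longer reach the transport; -6 is -ENXIO. A caller typically reacts to the negative return value by taking the qpair out of service, roughly as in this hedged sketch (the helper name is an assumption; only the SPDK call itself is real):

/*
 * Hedged sketch: how a poller might react to the negative return value that
 * accompanies the "CQ transport error -6" log lines above. Only
 * spdk_nvme_qpair_process_completions() is real SPDK API here.
 */
#include "spdk/nvme.h"
#include <errno.h>
#include <stdbool.h>

static bool
poll_qpair_once(struct spdk_nvme_qpair *qpair)
{
        /* 0 = process as many completions as are currently ready. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

        if (rc < 0) {
                /* rc is -ENXIO (-6) once the TCP connection to the target is gone;
                 * stop polling this qpair and let reset/teardown handle it. */
                return false;
        }
        return true;
}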
00:28:22.959 Write completed with error (sct=0, sc=8)
00:28:22.959 starting I/O failed: -6
    (the write-failure pair above repeats for each outstanding write as the next group of qpairs fails)
00:28:22.959 [2024-10-06 11:23:20.039241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.959 [2024-10-06 11:23:20.040192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:22.960 [2024-10-06 11:23:20.041363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:22.960 [2024-10-06 11:23:20.043530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.960 NVMe io qpair process completion error
00:28:22.961 Write completed with error (sct=0, sc=8)
00:28:22.961 starting I/O failed: -6
    (the write-failure pair above repeats as a further group of qpairs is torn down)
00:28:22.961 [2024-10-06 11:23:20.044477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
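The interleaved "starting I/O failed: -6" lines come from the submission side: once a qpair has failed, new writes are rejected immediately with -ENXIO instead of being queued. A sketch of that pattern, assuming spdk_nvme_ns_cmd_write() as the submission call (the wrapper function and its arguments are illustrative):

/*
 * Illustrative sketch of the submission-side failure behind
 * "starting I/O failed: -6"; only spdk_nvme_ns_cmd_write() is real SPDK API.
 */
#include "spdk/nvme.h"
#include <stdio.h>

static void
write_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
        (void)cb_arg;
        (void)cpl;   /* status handling is shown in the earlier callback sketch */
}

static int
submit_one_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                 void *buf, uint64_t lba, uint32_t lba_count)
{
        int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
                                        write_complete_cb, NULL, 0 /* io_flags */);
        if (rc != 0) {
                /* -6 == -ENXIO: the qpair is already failed, nothing was queued. */
                printf("starting I/O failed: %d\n", rc);
        }
        return rc;
}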
00:28:22.961 Write completed with error (sct=0, sc=8)
00:28:22.961 starting I/O failed: -6
    (the write-failure pair above repeats for the writes still outstanding)
00:28:22.961 [2024-10-06 11:23:20.045376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.961 [2024-10-06 11:23:20.046412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:22.962 [2024-10-06 11:23:20.048276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:22.962 NVMe io qpair process completion error
    (further write failures follow)
00:28:22.962 Write completed with error (sct=0, sc=8)
starting I/O failed: -6 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 starting I/O failed: -6 00:28:22.962 starting I/O failed: -6 00:28:22.962 starting I/O failed: -6 00:28:22.962 starting I/O failed: -6 00:28:22.962 starting I/O failed: -6 00:28:22.962 starting I/O failed: -6 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 starting I/O failed: -6 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 starting I/O failed: -6 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 starting I/O failed: -6 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 starting I/O failed: -6 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 starting I/O failed: -6 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 starting I/O failed: -6 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 starting I/O failed: -6 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 starting I/O failed: -6 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 starting I/O failed: -6 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 starting I/O failed: -6 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 starting I/O failed: -6 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 starting I/O failed: -6 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 starting I/O failed: -6 00:28:22.962 Write completed with error (sct=0, sc=8) 00:28:22.962 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write 
completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 [2024-10-06 11:23:20.051177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write 
completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 [2024-10-06 11:23:20.055231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:22.963 NVMe io qpair process completion error 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error 
(sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.963 starting I/O failed: -6 00:28:22.963 Write completed with error (sct=0, sc=8) 00:28:22.964 [2024-10-06 11:23:20.056108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write 
completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 [2024-10-06 11:23:20.056975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O 
failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 [2024-10-06 11:23:20.057996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O 
failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.964 Write completed with error (sct=0, sc=8) 00:28:22.964 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O 
failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 [2024-10-06 11:23:20.062016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:22.965 NVMe io qpair process completion error 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write 
completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 [2024-10-06 11:23:20.064072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, 
sc=8) 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.965 starting I/O failed: -6 00:28:22.965 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 [2024-10-06 11:23:20.065086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 
starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 
starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 [2024-10-06 11:23:20.067085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.966 NVMe io qpair process completion error 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 
00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 starting I/O failed: -6 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.966 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 [2024-10-06 11:23:20.068103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 
starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 [2024-10-06 11:23:20.068929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6 00:28:22.967 Write completed with 
error (sct=0, sc=8) 00:28:22.967 starting I/O failed: -6
00:28:22.967 [several hundred repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completions from spdk_nvme_perf between 00:28:22.967 and 00:28:22.971 omitted; the distinct qpair errors reported while the queue pairs were torn down follow]
00:28:22.967 [2024-10-06 11:23:20.069940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:22.968 [2024-10-06 11:23:20.076251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.968 NVMe io qpair process completion error
00:28:22.968 [2024-10-06 11:23:20.077367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.968 [2024-10-06 11:23:20.078241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:22.969 [2024-10-06 11:23:20.079370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:22.969 [2024-10-06 11:23:20.081992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.969 NVMe io qpair process completion error
00:28:22.970 [2024-10-06 11:23:20.083123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.970 [2024-10-06 11:23:20.083994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.970 [2024-10-06 11:23:20.084992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:22.971 [2024-10-06 11:23:20.086785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:22.971 NVMe io qpair process completion error
00:28:22.971 Initializing NVMe Controllers
00:28:22.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:28:22.971 Controller IO queue size 128, less than required.
00:28:22.971 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:22.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:28:22.971 Controller IO queue size 128, less than required.
00:28:22.971 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:22.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:28:22.971 Controller IO queue size 128, less than required.
00:28:22.971 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:22.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:28:22.971 Controller IO queue size 128, less than required.
00:28:22.971 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:22.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:22.971 Controller IO queue size 128, less than required.
00:28:22.971 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:22.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:28:22.971 Controller IO queue size 128, less than required.
00:28:22.971 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:22.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:28:22.971 Controller IO queue size 128, less than required.
00:28:22.971 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:22.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:28:22.971 Controller IO queue size 128, less than required.
00:28:22.971 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:22.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:28:22.971 Controller IO queue size 128, less than required.
00:28:22.971 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:22.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:28:22.971 Controller IO queue size 128, less than required.
00:28:22.971 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:22.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:22.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:22.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:22.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:22.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:22.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:22.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:22.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:22.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:22.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:22.971 Initialization complete. Launching workers.
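The "Controller IO queue size 128, less than required" warnings above mean the perf tool asked for a deeper submission queue than the 128 entries the target granted, so excess requests sit queued in the host driver. The exact flags that target/shutdown.sh passes to spdk_nvme_perf are not shown in this log, so the invocation below is only a sketch of how the queue depth (-q) and I/O size (-o) could be lowered to stay inside the granted queue size; the workload, runtime, and subsystem chosen here are placeholders.

# Hedged sketch only -- not the invocation shutdown.sh actually uses.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 64 -o 4096 -w randwrite -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'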
00:28:22.971 ========================================================
00:28:22.971 Latency(us)
00:28:22.971 Device Information : IOPS MiB/s Average min max
00:28:22.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2218.16 95.31 57711.21 711.84 110698.72
00:28:22.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2183.14 93.81 58652.69 503.04 107639.01
00:28:22.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2166.60 93.10 59122.93 748.68 107655.88
00:28:22.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2221.60 95.46 57677.00 823.89 106380.68
00:28:22.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2204.41 94.72 58140.51 683.16 105097.81
00:28:22.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2201.83 94.61 58258.33 814.85 102967.48
00:28:22.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2185.08 93.89 58739.52 537.66 113749.28
00:28:22.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2190.02 94.10 58625.22 828.32 99733.52
00:28:22.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2204.20 94.71 58318.86 496.21 102250.23
00:28:22.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2192.60 94.21 58651.58 846.38 126463.16
00:28:22.971 ========================================================
00:28:22.971 Total : 21967.63 943.92 58386.70 496.21 126463.16
00:28:22.971
00:28:22.971 [2024-10-06 11:23:20.089751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e07d0 is same with the state(6) to be set
00:28:22.971 [2024-10-06 11:23:20.089796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0170 is same with the state(6) to be set
00:28:22.971 [2024-10-06 11:23:20.089826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e04a0 is same with the state(6) to be set
00:28:22.971 [2024-10-06 11:23:20.089854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61180 is same with the state(6) to be set
00:28:22.971 [2024-10-06 11:23:20.089882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19dfe40 is same with the state(6) to be set
00:28:22.971 [2024-10-06 11:23:20.089910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c280 is same with the state(6) to be set
00:28:22.971 [2024-10-06 11:23:20.089938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fe80 is same with the state(6) to be set
00:28:22.971 [2024-10-06 11:23:20.089965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57370 is same with the state(6) to be set
00:28:22.971 [2024-10-06 11:23:20.089992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6af80 is same with the state(6) to be set
00:28:22.971 [2024-10-06 11:23:20.090021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66080 is same with the state(6) to be set
00:28:22.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:22.971 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:28:23.907 11:23:21
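The Total row above is the column-wise aggregate of the ten device rows: the per-device IOPS values sum to roughly 21967.6 and the MiB/s values to 943.92, while the min and max columns take the smallest and largest per-device values (496.21 us on cnode8, 126463.16 us on cnode7). If the table has been saved to a file, the sums can be re-checked with a small filter; the file name perf.log below is just a placeholder.

# Re-derive the IOPS and MiB/s totals from a saved copy of the table (perf.log is assumed).
grep 'from core 0:' perf.log | awk '{iops += $(NF-4); mibs += $(NF-3)} END {printf "IOPS %.2f MiB/s %.2f\n", iops, mibs}'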
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2165316 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2165316 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 2165316 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:23.907 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:23.907 rmmod nvme_tcp 00:28:23.907 rmmod nvme_fabrics 00:28:23.907 rmmod nvme_keyring 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 2165169 ']' 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 2165169 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 2165169 ']' 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 2165169 00:28:24.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2165169) - No such process 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 2165169 is not found' 00:28:24.167 Process with pid 2165169 is not found 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.167 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.069 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:26.069 00:28:26.069 real 0m9.709s 00:28:26.069 user 0m25.015s 00:28:26.069 sys 0m5.028s 00:28:26.069 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:26.069 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:26.069 ************************************ 00:28:26.069 END TEST nvmf_shutdown_tc4 00:28:26.069 ************************************ 00:28:26.069 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:28:26.069 00:28:26.069 real 0m39.650s 00:28:26.069 user 1m38.410s 00:28:26.069 sys 0m13.470s 00:28:26.069 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:26.069 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
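The trace above is nvmftestfini from test/nvmf/common.sh tearing the host side down: it tries to kill the target app (already gone, hence the "No such process" and "Process with pid 2165169 is not found" messages), unloads the NVMe/TCP kernel modules, and restores the firewall while filtering out the SPDK_NVMF rules; the address flush on the next trace line releases the test interface. A rough consolidation of those steps, for reference only -- the real function also retries module removal and handles the network namespace in ways not reproduced here; the namespace and interface names are taken from the trace:

# Consolidated sketch of the teardown traced above (not the actual nvmftestfini body).
sync
modprobe -v -r nvme-tcp          # also drops nvme_fabrics and nvme_keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except the SPDK test rules
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1                               # release the test address on the second port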
common/autotest_common.sh@10 -- # set +x 00:28:26.069 ************************************ 00:28:26.069 END TEST nvmf_shutdown 00:28:26.069 ************************************ 00:28:26.069 11:23:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:26.069 00:28:26.069 real 17m59.650s 00:28:26.069 user 48m31.029s 00:28:26.069 sys 4m27.201s 00:28:26.069 11:23:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:26.069 11:23:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:26.069 ************************************ 00:28:26.069 END TEST nvmf_target_extra 00:28:26.069 ************************************ 00:28:26.329 11:23:23 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:26.329 11:23:23 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:26.329 11:23:23 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:26.329 11:23:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:26.329 ************************************ 00:28:26.329 START TEST nvmf_host 00:28:26.329 ************************************ 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:26.329 * Looking for test storage... 00:28:26.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:26.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.329 --rc genhtml_branch_coverage=1 00:28:26.329 --rc genhtml_function_coverage=1 00:28:26.329 --rc genhtml_legend=1 00:28:26.329 --rc geninfo_all_blocks=1 00:28:26.329 --rc geninfo_unexecuted_blocks=1 00:28:26.329 00:28:26.329 ' 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:26.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.329 --rc genhtml_branch_coverage=1 00:28:26.329 --rc genhtml_function_coverage=1 00:28:26.329 --rc genhtml_legend=1 00:28:26.329 --rc geninfo_all_blocks=1 00:28:26.329 --rc geninfo_unexecuted_blocks=1 00:28:26.329 00:28:26.329 ' 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:26.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.329 --rc genhtml_branch_coverage=1 00:28:26.329 --rc genhtml_function_coverage=1 00:28:26.329 --rc genhtml_legend=1 00:28:26.329 --rc geninfo_all_blocks=1 00:28:26.329 --rc geninfo_unexecuted_blocks=1 00:28:26.329 00:28:26.329 ' 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:26.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.329 --rc genhtml_branch_coverage=1 00:28:26.329 --rc genhtml_function_coverage=1 00:28:26.329 --rc genhtml_legend=1 00:28:26.329 --rc geninfo_all_blocks=1 00:28:26.329 --rc geninfo_unexecuted_blocks=1 00:28:26.329 00:28:26.329 ' 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
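The scripts/common.sh trace above (lt 1.15 2 going through cmp_versions) decides whether the installed lcov predates version 2 so the right coverage flags get exported. A minimal sketch of that comparison, under the assumption that the real helper behaves as the trace suggests (it also handles '>', '=', and further wrappers that are only partly visible here):

# Hedged sketch of the lt/cmp_versions helpers seen in the trace, not the real implementation.
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local IFS=.-:                      # split versions on '.', '-' and ':'
    local op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        ((a == b)) && continue         # equal component: look at the next one
        case $op in
            '<') ((a < b)); return ;;
            '>') ((a > b)); return ;;
        esac
    done
    [[ $op == '=' ]]                   # all components equal
}
lt 1.15 2 && echo "lcov 1.15 predates 2"   # mirrors the check in the trace

The nvmf/common.sh environment setup then continues below.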
00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.329 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:26.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:26.330 11:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.589 ************************************ 00:28:26.589 START TEST nvmf_multicontroller 00:28:26.589 ************************************ 00:28:26.589 11:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:26.589 * Looking for test storage... 
00:28:26.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:26.589 11:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:26.589 11:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:28:26.589 11:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:26.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.589 --rc genhtml_branch_coverage=1 00:28:26.589 --rc genhtml_function_coverage=1 00:28:26.589 --rc genhtml_legend=1 00:28:26.589 --rc geninfo_all_blocks=1 00:28:26.589 --rc geninfo_unexecuted_blocks=1 00:28:26.589 00:28:26.589 ' 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:26.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.589 --rc genhtml_branch_coverage=1 00:28:26.589 --rc genhtml_function_coverage=1 00:28:26.589 --rc genhtml_legend=1 00:28:26.589 --rc geninfo_all_blocks=1 00:28:26.589 --rc geninfo_unexecuted_blocks=1 00:28:26.589 00:28:26.589 ' 00:28:26.589 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:26.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.589 --rc genhtml_branch_coverage=1 00:28:26.589 --rc genhtml_function_coverage=1 00:28:26.589 --rc genhtml_legend=1 00:28:26.589 --rc geninfo_all_blocks=1 00:28:26.589 --rc geninfo_unexecuted_blocks=1 00:28:26.590 00:28:26.590 ' 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:26.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.590 --rc genhtml_branch_coverage=1 00:28:26.590 --rc genhtml_function_coverage=1 00:28:26.590 --rc genhtml_legend=1 00:28:26.590 --rc geninfo_all_blocks=1 00:28:26.590 --rc geninfo_unexecuted_blocks=1 00:28:26.590 00:28:26.590 ' 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:26.590 11:23:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:26.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:26.590 11:23:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:28:26.590 11:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:28:31.856 
11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:31.856 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:31.856 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.856 11:23:29 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.856 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:31.857 Found net devices under 0000:af:00.0: cvl_0_0 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:31.857 Found net devices under 0000:af:00.1: cvl_0_1 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
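The trace above is gather_supported_nvmf_pci_devs at work: the script builds lists of known Intel E810/X722 and Mellanox device IDs, matches them against the PCI bus, and resolves each matching function to its kernel net device through sysfs. A condensed sketch of that last step, using the same parameter expansion seen in the trace (the PCI address is simply the one this run found):

    # Resolve the kernel net device name(s) behind one PCI function, as the
    # pci_net_devs expansion above does; 0000:af:00.0 is the E810 port this
    # run discovered.
    pci=0000:af:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the device names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"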
00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:31.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:31.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:28:31.857 00:28:31.857 --- 10.0.0.2 ping statistics --- 00:28:31.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.857 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:31.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:31.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:28:31.857 00:28:31.857 --- 10.0.0.1 ping statistics --- 00:28:31.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.857 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=2169792 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 2169792 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2169792 ']' 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:31.857 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:31.857 [2024-10-06 11:23:29.362252] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
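nvmf_tcp_init, traced above, prepares the two discovered ports for a loop-through test: cvl_0_0 is moved into a fresh network namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens the NVMe/TCP port, and a ping in each direction confirms connectivity. A condensed sketch of those steps, using only commands that appear in the trace (interface names and addresses are the ones this run picked):

    # Move the target NIC into its own namespace and address both sides.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic in from the initiator interface, then check the
    # path in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched with ip netns exec cvl_0_0_ns_spdk prepended (the NVMF_TARGET_NS_CMD prefix folded into NVMF_APP above), so its TCP listeners bind inside the namespace while its RPC Unix socket stays reachable from the root namespace.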
00:28:31.857 [2024-10-06 11:23:29.362301] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:31.857 [2024-10-06 11:23:29.422087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:32.117 [2024-10-06 11:23:29.461283] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:32.117 [2024-10-06 11:23:29.461321] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.117 [2024-10-06 11:23:29.461328] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:32.117 [2024-10-06 11:23:29.461334] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:32.117 [2024-10-06 11:23:29.461339] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:32.117 [2024-10-06 11:23:29.462245] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:32.117 [2024-10-06 11:23:29.462332] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:28:32.117 [2024-10-06 11:23:29.462333] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.117 [2024-10-06 11:23:29.591500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.117 Malloc0 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.117 [2024-10-06 11:23:29.658190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.117 [2024-10-06 11:23:29.666128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.117 Malloc1 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.117 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.375 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.375 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:32.375 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.375 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.375 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.375 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:32.375 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.375 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.375 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.375 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:32.375 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.375 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.375 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.375 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2169881 00:28:32.375 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:32.375 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:32.376 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2169881 /var/tmp/bdevperf.sock 00:28:32.376 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2169881 ']' 00:28:32.376 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:32.376 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:32.376 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:32.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
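By this point the target has been configured through rpc_cmd (the test harness's wrapper around scripts/rpc.py): a TCP transport, two 64 MiB malloc bdevs with 512-byte blocks, and two subsystems (cnode1 and cnode2) that each expose one namespace and listen on both 10.0.0.2:4420 and 10.0.0.2:4421. bdevperf is then started with its own RPC socket so that controllers can be attached to it at runtime. Issued directly, the same configuration would look roughly like the sketch below (sizes, NQNs and ports are the ones used in this run; paths are relative to the SPDK repo root):

    # Target-side configuration, mirroring the rpc_cmd calls traced above.
    rpc=(./scripts/rpc.py)
    "${rpc[@]}" nvmf_create_transport -t tcp -o -u 8192
    "${rpc[@]}" bdev_malloc_create 64 512 -b Malloc0
    "${rpc[@]}" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "${rpc[@]}" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "${rpc[@]}" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "${rpc[@]}" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2 is built the same way around Malloc1 with serial SPDK00000000000002.
    # bdevperf runs against a private RPC socket and waits (-z) to be driven over RPC.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &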
00:28:32.376 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:32.376 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.634 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:32.634 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:28:32.634 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:28:32.634 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.634 11:23:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.634 NVMe0n1 00:28:32.634 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.634 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:32.634 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:32.634 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.634 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.634 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.634 1 00:28:32.634 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:32.634 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:32.634 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:32.634 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:32.634 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:32.634 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:32.634 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.635 request: 00:28:32.635 { 00:28:32.635 "name": "NVMe0", 00:28:32.635 "trtype": "tcp", 00:28:32.635 "traddr": "10.0.0.2", 00:28:32.635 "adrfam": "ipv4", 00:28:32.635 "trsvcid": "4420", 00:28:32.635 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:28:32.635 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:32.635 "hostaddr": "10.0.0.1", 00:28:32.635 "prchk_reftag": false, 00:28:32.635 "prchk_guard": false, 00:28:32.635 "hdgst": false, 00:28:32.635 "ddgst": false, 00:28:32.635 "allow_unrecognized_csi": false, 00:28:32.635 "method": "bdev_nvme_attach_controller", 00:28:32.635 "req_id": 1 00:28:32.635 } 00:28:32.635 Got JSON-RPC error response 00:28:32.635 response: 00:28:32.635 { 00:28:32.635 "code": -114, 00:28:32.635 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:32.635 } 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.635 request: 00:28:32.635 { 00:28:32.635 "name": "NVMe0", 00:28:32.635 "trtype": "tcp", 00:28:32.635 "traddr": "10.0.0.2", 00:28:32.635 "adrfam": "ipv4", 00:28:32.635 "trsvcid": "4420", 00:28:32.635 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:32.635 "hostaddr": "10.0.0.1", 00:28:32.635 "prchk_reftag": false, 00:28:32.635 "prchk_guard": false, 00:28:32.635 "hdgst": false, 00:28:32.635 "ddgst": false, 00:28:32.635 "allow_unrecognized_csi": false, 00:28:32.635 "method": "bdev_nvme_attach_controller", 00:28:32.635 "req_id": 1 00:28:32.635 } 00:28:32.635 Got JSON-RPC error response 00:28:32.635 response: 00:28:32.635 { 00:28:32.635 "code": -114, 00:28:32.635 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:32.635 } 00:28:32.635 11:23:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.635 request: 00:28:32.635 { 00:28:32.635 "name": "NVMe0", 00:28:32.635 "trtype": "tcp", 00:28:32.635 "traddr": "10.0.0.2", 00:28:32.635 "adrfam": "ipv4", 00:28:32.635 "trsvcid": "4420", 00:28:32.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:32.635 "hostaddr": "10.0.0.1", 00:28:32.635 "prchk_reftag": false, 00:28:32.635 "prchk_guard": false, 00:28:32.635 "hdgst": false, 00:28:32.635 "ddgst": false, 00:28:32.635 "multipath": "disable", 00:28:32.635 "allow_unrecognized_csi": false, 00:28:32.635 "method": "bdev_nvme_attach_controller", 00:28:32.635 "req_id": 1 00:28:32.635 } 00:28:32.635 Got JSON-RPC error response 00:28:32.635 response: 00:28:32.635 { 00:28:32.635 "code": -114, 00:28:32.635 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:28:32.635 } 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:32.635 11:23:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.635 request: 00:28:32.635 { 00:28:32.635 "name": "NVMe0", 00:28:32.635 "trtype": "tcp", 00:28:32.635 "traddr": "10.0.0.2", 00:28:32.635 "adrfam": "ipv4", 00:28:32.635 "trsvcid": "4420", 00:28:32.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:32.635 "hostaddr": "10.0.0.1", 00:28:32.635 "prchk_reftag": false, 00:28:32.635 "prchk_guard": false, 00:28:32.635 "hdgst": false, 00:28:32.635 "ddgst": false, 00:28:32.635 "multipath": "failover", 00:28:32.635 "allow_unrecognized_csi": false, 00:28:32.635 "method": "bdev_nvme_attach_controller", 00:28:32.635 "req_id": 1 00:28:32.635 } 00:28:32.635 Got JSON-RPC error response 00:28:32.635 response: 00:28:32.635 { 00:28:32.635 "code": -114, 00:28:32.635 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:32.635 } 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.635 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.894 00:28:32.894 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
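The request/response pairs above are the multipath behaviour under test: with NVMe0 attached to cnode1 at 10.0.0.2:4420, re-attaching that controller name with a different host NQN, pointing it at a different subsystem (cnode2), or passing -x disable or -x failover for the same path are all rejected with JSON-RPC error -114, while attaching the same subsystem through the second listener port (4421) succeeds and adds a second path. As direct rpc.py calls against bdevperf's RPC socket, the accepted operations look roughly like this (rpc_cmd -s /var/tmp/bdevperf.sock is the wrapper the test uses):

    rpc=(./scripts/rpc.py -s /var/tmp/bdevperf.sock)
    # First path: cnode1 via 10.0.0.2:4420 from host address 10.0.0.1 -> bdev NVMe0n1.
    "${rpc[@]}" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    # Second path to the same subsystem via port 4421 is accepted.
    "${rpc[@]}" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1
    # Reusing the name NVMe0 for another subsystem, another host NQN, or with
    # -x disable / -x failover against the existing path returns error -114.
    "${rpc[@]}" bdev_nvme_get_controllers    # lists the attached controllers

A second controller, NVMe1, is then attached on port 4421 before bdevperf.py's perform_tests drives the 128-deep 4 KiB write workload (about 24.6k IOPS in this run), after which NVMe1 is detached, bdevperf is killed, and the subsystems are deleted.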
00:28:32.894 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:32.894 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.894 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.894 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.894 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:28:32.894 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.894 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.894 00:28:32.894 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.894 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:32.894 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:32.894 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.894 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.894 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.894 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:32.894 11:23:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:34.267 { 00:28:34.267 "results": [ 00:28:34.267 { 00:28:34.267 "job": "NVMe0n1", 00:28:34.267 "core_mask": "0x1", 00:28:34.267 "workload": "write", 00:28:34.267 "status": "finished", 00:28:34.267 "queue_depth": 128, 00:28:34.267 "io_size": 4096, 00:28:34.267 "runtime": 1.004477, 00:28:34.267 "iops": 24586.924339731024, 00:28:34.267 "mibps": 96.04267320207431, 00:28:34.267 "io_failed": 0, 00:28:34.267 "io_timeout": 0, 00:28:34.267 "avg_latency_us": 5199.801304419083, 00:28:34.267 "min_latency_us": 3198.7809523809524, 00:28:34.267 "max_latency_us": 13856.182857142858 00:28:34.267 } 00:28:34.267 ], 00:28:34.267 "core_count": 1 00:28:34.267 } 00:28:34.267 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:34.267 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.267 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:34.267 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.267 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:28:34.267 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2169881 00:28:34.267 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 2169881 ']' 00:28:34.267 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2169881 00:28:34.267 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:28:34.267 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:34.268 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2169881 00:28:34.268 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:34.268 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:34.268 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2169881' 00:28:34.268 killing process with pid 2169881 00:28:34.268 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2169881 00:28:34.268 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2169881 00:28:34.268 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:34.268 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.268 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:28:34.527 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:34.527 [2024-10-06 11:23:29.778434] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:28:34.527 [2024-10-06 11:23:29.778485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169881 ] 00:28:34.527 [2024-10-06 11:23:29.834189] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.527 [2024-10-06 11:23:29.874834] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.527 [2024-10-06 11:23:30.441321] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name a93a2bff-4fb5-418d-ac60-4adb0df14953 already exists 00:28:34.527 [2024-10-06 11:23:30.441350] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:a93a2bff-4fb5-418d-ac60-4adb0df14953 alias for bdev NVMe1n1 00:28:34.527 [2024-10-06 11:23:30.441358] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:34.527 Running I/O for 1 seconds... 00:28:34.527 24569.00 IOPS, 95.97 MiB/s 00:28:34.527 Latency(us) 00:28:34.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.527 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:34.527 NVMe0n1 : 1.00 24586.92 96.04 0.00 0.00 5199.80 3198.78 13856.18 00:28:34.527 =================================================================================================================== 00:28:34.527 Total : 24586.92 96.04 0.00 0.00 5199.80 3198.78 13856.18 00:28:34.527 Received shutdown signal, test time was about 1.000000 seconds 00:28:34.527 00:28:34.527 Latency(us) 00:28:34.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.527 =================================================================================================================== 00:28:34.527 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:34.527 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:34.527 rmmod nvme_tcp 00:28:34.527 rmmod nvme_fabrics 00:28:34.527 rmmod nvme_keyring 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 2169792 ']' 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@516 -- # killprocess 2169792 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2169792 ']' 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2169792 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:34.527 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2169792 00:28:34.527 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:34.527 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:34.527 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2169792' 00:28:34.527 killing process with pid 2169792 00:28:34.527 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2169792 00:28:34.527 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2169792 00:28:34.786 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:34.786 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:34.786 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:34.786 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:28:34.786 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:28:34.786 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:34.786 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:28:34.786 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:34.786 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:34.786 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.786 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.786 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:37.318 00:28:37.318 real 0m10.388s 00:28:37.318 user 0m11.777s 00:28:37.318 sys 0m4.712s 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:37.318 ************************************ 00:28:37.318 END TEST nvmf_multicontroller 00:28:37.318 ************************************ 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.318 ************************************ 00:28:37.318 START TEST nvmf_aer 00:28:37.318 ************************************ 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:37.318 * Looking for test storage... 00:28:37.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:37.318 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:37.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.319 --rc genhtml_branch_coverage=1 00:28:37.319 --rc genhtml_function_coverage=1 00:28:37.319 --rc genhtml_legend=1 00:28:37.319 --rc geninfo_all_blocks=1 00:28:37.319 --rc geninfo_unexecuted_blocks=1 00:28:37.319 00:28:37.319 ' 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:37.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.319 --rc genhtml_branch_coverage=1 00:28:37.319 --rc genhtml_function_coverage=1 00:28:37.319 --rc genhtml_legend=1 00:28:37.319 --rc geninfo_all_blocks=1 00:28:37.319 --rc geninfo_unexecuted_blocks=1 00:28:37.319 00:28:37.319 ' 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:37.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.319 --rc genhtml_branch_coverage=1 00:28:37.319 --rc genhtml_function_coverage=1 00:28:37.319 --rc genhtml_legend=1 00:28:37.319 --rc geninfo_all_blocks=1 00:28:37.319 --rc geninfo_unexecuted_blocks=1 00:28:37.319 00:28:37.319 ' 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:37.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.319 --rc genhtml_branch_coverage=1 00:28:37.319 --rc genhtml_function_coverage=1 00:28:37.319 --rc genhtml_legend=1 00:28:37.319 --rc geninfo_all_blocks=1 00:28:37.319 --rc geninfo_unexecuted_blocks=1 00:28:37.319 00:28:37.319 ' 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:37.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:28:37.319 11:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:42.597 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:42.597 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:28:42.597 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:42.597 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:42.597 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:42.597 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:42.597 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:42.597 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:28:42.597 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:42.597 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:28:42.597 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:28:42.597 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:28:42.597 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:42.598 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:42.598 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:42.598 Found net devices under 0000:af:00.0: cvl_0_0 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:42.598 11:23:39 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:42.598 Found net devices under 0000:af:00.1: cvl_0_1 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:42.598 11:23:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:42.598 
11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:42.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:42.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:28:42.598 00:28:42.598 --- 10.0.0.2 ping statistics --- 00:28:42.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.598 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:42.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:42.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:28:42.598 00:28:42.598 --- 10.0.0.1 ping statistics --- 00:28:42.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.598 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=2173615 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 2173615 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2173615 ']' 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:42.598 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:42.857 [2024-10-06 11:23:40.194751] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:28:42.857 [2024-10-06 11:23:40.194799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.857 [2024-10-06 11:23:40.256706] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:42.857 [2024-10-06 11:23:40.297109] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.857 [2024-10-06 11:23:40.297150] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.857 [2024-10-06 11:23:40.297158] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.857 [2024-10-06 11:23:40.297164] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.857 [2024-10-06 11:23:40.297169] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:42.857 [2024-10-06 11:23:40.298637] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.857 [2024-10-06 11:23:40.298741] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.857 [2024-10-06 11:23:40.298837] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:28:42.857 [2024-10-06 11:23:40.298839] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.857 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:42.857 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:28:42.857 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:42.857 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:42.857 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.116 [2024-10-06 11:23:40.460515] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.116 Malloc0 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.116 [2024-10-06 11:23:40.511741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.116 [ 00:28:43.116 { 00:28:43.116 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:43.116 "subtype": "Discovery", 00:28:43.116 "listen_addresses": [], 00:28:43.116 "allow_any_host": true, 00:28:43.116 "hosts": [] 00:28:43.116 }, 00:28:43.116 { 00:28:43.116 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:43.116 "subtype": "NVMe", 00:28:43.116 "listen_addresses": [ 00:28:43.116 { 00:28:43.116 "trtype": "TCP", 00:28:43.116 "adrfam": "IPv4", 00:28:43.116 "traddr": "10.0.0.2", 00:28:43.116 "trsvcid": "4420" 00:28:43.116 } 00:28:43.116 ], 00:28:43.116 "allow_any_host": true, 00:28:43.116 "hosts": [], 00:28:43.116 "serial_number": "SPDK00000000000001", 00:28:43.116 "model_number": "SPDK bdev Controller", 00:28:43.116 "max_namespaces": 2, 00:28:43.116 "min_cntlid": 1, 00:28:43.116 "max_cntlid": 65519, 00:28:43.116 "namespaces": [ 00:28:43.116 { 00:28:43.116 "nsid": 1, 00:28:43.116 "bdev_name": "Malloc0", 00:28:43.116 "name": "Malloc0", 00:28:43.116 "nguid": "22D7CB589B7B43ECA7D4878C2B42F164", 00:28:43.116 "uuid": "22d7cb58-9b7b-43ec-a7d4-878c2b42f164" 00:28:43.116 } 00:28:43.116 ] 00:28:43.116 } 00:28:43.116 ] 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2173834 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:28:43.116 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.375 Malloc1 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.375 [ 00:28:43.375 { 00:28:43.375 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:43.375 "subtype": "Discovery", 00:28:43.375 "listen_addresses": [], 00:28:43.375 "allow_any_host": true, 00:28:43.375 "hosts": [] 00:28:43.375 }, 00:28:43.375 { 00:28:43.375 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:43.375 "subtype": "NVMe", 00:28:43.375 "listen_addresses": [ 00:28:43.375 { 00:28:43.375 "trtype": "TCP", 00:28:43.375 "adrfam": "IPv4", 00:28:43.375 "traddr": "10.0.0.2", 00:28:43.375 "trsvcid": "4420" 00:28:43.375 Asynchronous Event Request test 00:28:43.375 Attaching to 10.0.0.2 00:28:43.375 Attached to 10.0.0.2 00:28:43.375 Registering asynchronous event callbacks... 00:28:43.375 Starting namespace attribute notice tests for all controllers... 00:28:43.375 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:43.375 aer_cb - Changed Namespace 00:28:43.375 Cleaning up... 
00:28:43.375 } 00:28:43.375 ], 00:28:43.375 "allow_any_host": true, 00:28:43.375 "hosts": [], 00:28:43.375 "serial_number": "SPDK00000000000001", 00:28:43.375 "model_number": "SPDK bdev Controller", 00:28:43.375 "max_namespaces": 2, 00:28:43.375 "min_cntlid": 1, 00:28:43.375 "max_cntlid": 65519, 00:28:43.375 "namespaces": [ 00:28:43.375 { 00:28:43.375 "nsid": 1, 00:28:43.375 "bdev_name": "Malloc0", 00:28:43.375 "name": "Malloc0", 00:28:43.375 "nguid": "22D7CB589B7B43ECA7D4878C2B42F164", 00:28:43.375 "uuid": "22d7cb58-9b7b-43ec-a7d4-878c2b42f164" 00:28:43.375 }, 00:28:43.375 { 00:28:43.375 "nsid": 2, 00:28:43.375 "bdev_name": "Malloc1", 00:28:43.375 "name": "Malloc1", 00:28:43.375 "nguid": "158E037EF6E44D83BAD120F94637D1EB", 00:28:43.375 "uuid": "158e037e-f6e4-4d83-bad1-20f94637d1eb" 00:28:43.375 } 00:28:43.375 ] 00:28:43.375 } 00:28:43.375 ] 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2173834 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:43.375 rmmod nvme_tcp 00:28:43.375 rmmod nvme_fabrics 00:28:43.375 rmmod nvme_keyring 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 2173615 ']' 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # 
killprocess 2173615 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2173615 ']' 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2173615 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:43.375 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2173615 00:28:43.634 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:43.634 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:43.634 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2173615' 00:28:43.634 killing process with pid 2173615 00:28:43.634 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2173615 00:28:43.634 11:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2173615 00:28:43.634 11:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:43.634 11:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:43.634 11:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:43.634 11:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:28:43.634 11:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:28:43.634 11:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:28:43.634 11:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:43.634 11:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:43.634 11:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:43.634 11:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.634 11:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.634 11:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:46.174 00:28:46.174 real 0m8.880s 00:28:46.174 user 0m4.942s 00:28:46.174 sys 0m4.670s 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:46.174 ************************************ 00:28:46.174 END TEST nvmf_aer 00:28:46.174 ************************************ 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.174 ************************************ 00:28:46.174 START TEST nvmf_async_init 00:28:46.174 ************************************ 00:28:46.174 11:23:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:46.174 * Looking for test storage... 00:28:46.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:46.174 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:46.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.175 --rc genhtml_branch_coverage=1 00:28:46.175 --rc genhtml_function_coverage=1 00:28:46.175 --rc genhtml_legend=1 00:28:46.175 --rc geninfo_all_blocks=1 00:28:46.175 --rc geninfo_unexecuted_blocks=1 00:28:46.175 00:28:46.175 ' 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:46.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.175 --rc genhtml_branch_coverage=1 00:28:46.175 --rc genhtml_function_coverage=1 00:28:46.175 --rc genhtml_legend=1 00:28:46.175 --rc geninfo_all_blocks=1 00:28:46.175 --rc geninfo_unexecuted_blocks=1 00:28:46.175 00:28:46.175 ' 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:46.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.175 --rc genhtml_branch_coverage=1 00:28:46.175 --rc genhtml_function_coverage=1 00:28:46.175 --rc genhtml_legend=1 00:28:46.175 --rc geninfo_all_blocks=1 00:28:46.175 --rc geninfo_unexecuted_blocks=1 00:28:46.175 00:28:46.175 ' 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:46.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.175 --rc genhtml_branch_coverage=1 00:28:46.175 --rc genhtml_function_coverage=1 00:28:46.175 --rc genhtml_legend=1 00:28:46.175 --rc geninfo_all_blocks=1 00:28:46.175 --rc geninfo_unexecuted_blocks=1 00:28:46.175 00:28:46.175 ' 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.175 11:23:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:46.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:46.175 11:23:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=01baf793cf2e42fa892ec67d66f1e836 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:28:46.175 11:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.459 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:51.460 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:51.460 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:51.460 Found net devices under 0000:af:00.0: cvl_0_0 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:51.460 Found net devices under 0000:af:00.1: cvl_0_1 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.460 11:23:48 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:51.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:51.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:28:51.460 00:28:51.460 --- 10.0.0.2 ping statistics --- 00:28:51.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.460 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:51.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:28:51.460 00:28:51.460 --- 10.0.0.1 ping statistics --- 00:28:51.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.460 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:51.460 11:23:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:51.460 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:51.460 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:51.460 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:51.460 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.460 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=2177308 00:28:51.460 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 2177308 00:28:51.460 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:51.460 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2177308 ']' 00:28:51.460 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.460 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:51.460 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.460 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:51.461 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.721 [2024-10-06 11:23:49.071766] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:28:51.721 [2024-10-06 11:23:49.071809] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.721 [2024-10-06 11:23:49.128813] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.721 [2024-10-06 11:23:49.167514] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:51.721 [2024-10-06 11:23:49.167553] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:51.721 [2024-10-06 11:23:49.167561] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:51.721 [2024-10-06 11:23:49.167567] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:51.721 [2024-10-06 11:23:49.167572] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:51.721 [2024-10-06 11:23:49.168118] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.721 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:51.721 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:28:51.721 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:51.721 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:51.721 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.721 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.721 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:51.721 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.721 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.980 [2024-10-06 11:23:49.296902] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.980 null0 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 01baf793cf2e42fa892ec67d66f1e836 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.980 [2024-10-06 11:23:49.353176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.980 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.981 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:51.981 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.981 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:52.240 nvme0n1 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:52.240 [ 00:28:52.240 { 00:28:52.240 "name": "nvme0n1", 00:28:52.240 "aliases": [ 00:28:52.240 "01baf793-cf2e-42fa-892e-c67d66f1e836" 00:28:52.240 ], 00:28:52.240 "product_name": "NVMe disk", 00:28:52.240 "block_size": 512, 00:28:52.240 "num_blocks": 2097152, 00:28:52.240 "uuid": "01baf793-cf2e-42fa-892e-c67d66f1e836", 00:28:52.240 "numa_id": 1, 00:28:52.240 "assigned_rate_limits": { 00:28:52.240 "rw_ios_per_sec": 0, 00:28:52.240 "rw_mbytes_per_sec": 0, 00:28:52.240 "r_mbytes_per_sec": 0, 00:28:52.240 "w_mbytes_per_sec": 0 00:28:52.240 }, 00:28:52.240 "claimed": false, 00:28:52.240 "zoned": false, 00:28:52.240 "supported_io_types": { 00:28:52.240 "read": true, 00:28:52.240 "write": true, 00:28:52.240 "unmap": false, 00:28:52.240 "flush": true, 00:28:52.240 "reset": true, 00:28:52.240 "nvme_admin": true, 00:28:52.240 "nvme_io": true, 00:28:52.240 "nvme_io_md": false, 00:28:52.240 "write_zeroes": true, 00:28:52.240 "zcopy": false, 00:28:52.240 "get_zone_info": false, 00:28:52.240 "zone_management": false, 00:28:52.240 "zone_append": false, 00:28:52.240 "compare": true, 00:28:52.240 "compare_and_write": true, 00:28:52.240 "abort": true, 00:28:52.240 "seek_hole": false, 00:28:52.240 "seek_data": false, 00:28:52.240 "copy": true, 00:28:52.240 "nvme_iov_md": false 00:28:52.240 }, 00:28:52.240 
"memory_domains": [ 00:28:52.240 { 00:28:52.240 "dma_device_id": "system", 00:28:52.240 "dma_device_type": 1 00:28:52.240 } 00:28:52.240 ], 00:28:52.240 "driver_specific": { 00:28:52.240 "nvme": [ 00:28:52.240 { 00:28:52.240 "trid": { 00:28:52.240 "trtype": "TCP", 00:28:52.240 "adrfam": "IPv4", 00:28:52.240 "traddr": "10.0.0.2", 00:28:52.240 "trsvcid": "4420", 00:28:52.240 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:52.240 }, 00:28:52.240 "ctrlr_data": { 00:28:52.240 "cntlid": 1, 00:28:52.240 "vendor_id": "0x8086", 00:28:52.240 "model_number": "SPDK bdev Controller", 00:28:52.240 "serial_number": "00000000000000000000", 00:28:52.240 "firmware_revision": "25.01", 00:28:52.240 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:52.240 "oacs": { 00:28:52.240 "security": 0, 00:28:52.240 "format": 0, 00:28:52.240 "firmware": 0, 00:28:52.240 "ns_manage": 0 00:28:52.240 }, 00:28:52.240 "multi_ctrlr": true, 00:28:52.240 "ana_reporting": false 00:28:52.240 }, 00:28:52.240 "vs": { 00:28:52.240 "nvme_version": "1.3" 00:28:52.240 }, 00:28:52.240 "ns_data": { 00:28:52.240 "id": 1, 00:28:52.240 "can_share": true 00:28:52.240 } 00:28:52.240 } 00:28:52.240 ], 00:28:52.240 "mp_policy": "active_passive" 00:28:52.240 } 00:28:52.240 } 00:28:52.240 ] 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:52.240 [2024-10-06 11:23:49.613690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:52.240 [2024-10-06 11:23:49.613744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113d190 (9): Bad file descriptor 00:28:52.240 [2024-10-06 11:23:49.746133] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:52.240 [ 00:28:52.240 { 00:28:52.240 "name": "nvme0n1", 00:28:52.240 "aliases": [ 00:28:52.240 "01baf793-cf2e-42fa-892e-c67d66f1e836" 00:28:52.240 ], 00:28:52.240 "product_name": "NVMe disk", 00:28:52.240 "block_size": 512, 00:28:52.240 "num_blocks": 2097152, 00:28:52.240 "uuid": "01baf793-cf2e-42fa-892e-c67d66f1e836", 00:28:52.240 "numa_id": 1, 00:28:52.240 "assigned_rate_limits": { 00:28:52.240 "rw_ios_per_sec": 0, 00:28:52.240 "rw_mbytes_per_sec": 0, 00:28:52.240 "r_mbytes_per_sec": 0, 00:28:52.240 "w_mbytes_per_sec": 0 00:28:52.240 }, 00:28:52.240 "claimed": false, 00:28:52.240 "zoned": false, 00:28:52.240 "supported_io_types": { 00:28:52.240 "read": true, 00:28:52.240 "write": true, 00:28:52.240 "unmap": false, 00:28:52.240 "flush": true, 00:28:52.240 "reset": true, 00:28:52.240 "nvme_admin": true, 00:28:52.240 "nvme_io": true, 00:28:52.240 "nvme_io_md": false, 00:28:52.240 "write_zeroes": true, 00:28:52.240 "zcopy": false, 00:28:52.240 "get_zone_info": false, 00:28:52.240 "zone_management": false, 00:28:52.240 "zone_append": false, 00:28:52.240 "compare": true, 00:28:52.240 "compare_and_write": true, 00:28:52.240 "abort": true, 00:28:52.240 "seek_hole": false, 00:28:52.240 "seek_data": false, 00:28:52.240 "copy": true, 00:28:52.240 "nvme_iov_md": false 00:28:52.240 }, 00:28:52.240 "memory_domains": [ 00:28:52.240 { 00:28:52.240 "dma_device_id": "system", 00:28:52.240 "dma_device_type": 1 00:28:52.240 } 00:28:52.240 ], 00:28:52.240 "driver_specific": { 00:28:52.240 "nvme": [ 00:28:52.240 { 00:28:52.240 "trid": { 00:28:52.240 "trtype": "TCP", 00:28:52.240 "adrfam": "IPv4", 00:28:52.240 "traddr": "10.0.0.2", 00:28:52.240 "trsvcid": "4420", 00:28:52.240 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:52.240 }, 00:28:52.240 "ctrlr_data": { 00:28:52.240 "cntlid": 2, 00:28:52.240 "vendor_id": "0x8086", 00:28:52.240 "model_number": "SPDK bdev Controller", 00:28:52.240 "serial_number": "00000000000000000000", 00:28:52.240 "firmware_revision": "25.01", 00:28:52.240 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:52.240 "oacs": { 00:28:52.240 "security": 0, 00:28:52.240 "format": 0, 00:28:52.240 "firmware": 0, 00:28:52.240 "ns_manage": 0 00:28:52.240 }, 00:28:52.240 "multi_ctrlr": true, 00:28:52.240 "ana_reporting": false 00:28:52.240 }, 00:28:52.240 "vs": { 00:28:52.240 "nvme_version": "1.3" 00:28:52.240 }, 00:28:52.240 "ns_data": { 00:28:52.240 "id": 1, 00:28:52.240 "can_share": true 00:28:52.240 } 00:28:52.240 } 00:28:52.240 ], 00:28:52.240 "mp_policy": "active_passive" 00:28:52.240 } 00:28:52.240 } 00:28:52.240 ] 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.1TmdYiLfao 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.1TmdYiLfao 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.1TmdYiLfao 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.240 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:52.501 [2024-10-06 11:23:49.818302] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:52.501 [2024-10-06 11:23:49.818387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:52.501 [2024-10-06 11:23:49.842380] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:52.501 nvme0n1 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:52.501 [ 00:28:52.501 { 00:28:52.501 "name": "nvme0n1", 00:28:52.501 "aliases": [ 00:28:52.501 "01baf793-cf2e-42fa-892e-c67d66f1e836" 00:28:52.501 ], 00:28:52.501 "product_name": "NVMe disk", 00:28:52.501 "block_size": 512, 00:28:52.501 "num_blocks": 2097152, 00:28:52.501 "uuid": "01baf793-cf2e-42fa-892e-c67d66f1e836", 00:28:52.501 "numa_id": 1, 00:28:52.501 "assigned_rate_limits": { 00:28:52.501 "rw_ios_per_sec": 0, 00:28:52.501 "rw_mbytes_per_sec": 0, 00:28:52.501 "r_mbytes_per_sec": 0, 00:28:52.501 "w_mbytes_per_sec": 0 00:28:52.501 }, 00:28:52.501 "claimed": false, 00:28:52.501 "zoned": false, 00:28:52.501 "supported_io_types": { 00:28:52.501 "read": true, 00:28:52.501 "write": true, 00:28:52.501 "unmap": false, 00:28:52.501 "flush": true, 00:28:52.501 "reset": true, 00:28:52.501 "nvme_admin": true, 00:28:52.501 "nvme_io": true, 00:28:52.501 "nvme_io_md": false, 00:28:52.501 "write_zeroes": true, 00:28:52.501 "zcopy": false, 00:28:52.501 "get_zone_info": false, 00:28:52.501 "zone_management": false, 00:28:52.501 "zone_append": false, 00:28:52.501 "compare": true, 00:28:52.501 "compare_and_write": true, 00:28:52.501 "abort": true, 00:28:52.501 "seek_hole": false, 00:28:52.501 "seek_data": false, 00:28:52.501 "copy": true, 00:28:52.501 "nvme_iov_md": false 00:28:52.501 }, 00:28:52.501 "memory_domains": [ 00:28:52.501 { 00:28:52.501 "dma_device_id": "system", 00:28:52.501 "dma_device_type": 1 00:28:52.501 } 00:28:52.501 ], 00:28:52.501 "driver_specific": { 00:28:52.501 "nvme": [ 00:28:52.501 { 00:28:52.501 "trid": { 00:28:52.501 "trtype": "TCP", 00:28:52.501 "adrfam": "IPv4", 00:28:52.501 "traddr": "10.0.0.2", 00:28:52.501 "trsvcid": "4421", 00:28:52.501 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:52.501 }, 00:28:52.501 "ctrlr_data": { 00:28:52.501 "cntlid": 3, 00:28:52.501 "vendor_id": "0x8086", 00:28:52.501 "model_number": "SPDK bdev Controller", 00:28:52.501 "serial_number": "00000000000000000000", 00:28:52.501 "firmware_revision": "25.01", 00:28:52.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:52.501 "oacs": { 00:28:52.501 "security": 0, 00:28:52.501 "format": 0, 00:28:52.501 "firmware": 0, 00:28:52.501 "ns_manage": 0 00:28:52.501 }, 00:28:52.501 "multi_ctrlr": true, 00:28:52.501 "ana_reporting": false 00:28:52.501 }, 00:28:52.501 "vs": { 00:28:52.501 "nvme_version": "1.3" 00:28:52.501 }, 00:28:52.501 "ns_data": { 00:28:52.501 "id": 1, 00:28:52.501 "can_share": true 00:28:52.501 } 00:28:52.501 } 00:28:52.501 ], 00:28:52.501 "mp_policy": "active_passive" 00:28:52.501 } 00:28:52.501 } 00:28:52.501 ] 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.1TmdYiLfao 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:52.501 rmmod nvme_tcp 00:28:52.501 rmmod nvme_fabrics 00:28:52.501 rmmod nvme_keyring 00:28:52.501 11:23:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:52.501 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:28:52.501 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:28:52.501 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 2177308 ']' 00:28:52.501 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 2177308 00:28:52.501 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2177308 ']' 00:28:52.501 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2177308 00:28:52.501 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:28:52.501 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:52.501 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2177308 00:28:52.501 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:52.501 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:52.501 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2177308' 00:28:52.501 killing process with pid 2177308 00:28:52.501 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2177308 00:28:52.501 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2177308 00:28:52.761 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:52.761 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:52.761 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:52.761 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:28:52.761 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:28:52.761 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:52.761 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:28:52.761 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:52.761 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:52.761 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:28:52.761 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.761 11:23:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.303 11:23:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:55.303 00:28:55.303 real 0m9.016s 00:28:55.303 user 0m3.002s 00:28:55.303 sys 0m4.447s 00:28:55.303 11:23:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:55.303 11:23:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:55.303 ************************************ 00:28:55.303 END TEST nvmf_async_init 00:28:55.303 ************************************ 00:28:55.303 11:23:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:55.303 11:23:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:55.303 11:23:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:55.303 11:23:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.303 ************************************ 00:28:55.303 START TEST dma 00:28:55.303 ************************************ 00:28:55.303 11:23:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:55.303 * Looking for test storage... 00:28:55.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:55.303 11:23:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:55.303 11:23:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:28:55.303 11:23:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:55.303 11:23:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:55.303 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:55.303 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:55.303 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:55.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.304 --rc genhtml_branch_coverage=1 00:28:55.304 --rc genhtml_function_coverage=1 00:28:55.304 --rc genhtml_legend=1 00:28:55.304 --rc geninfo_all_blocks=1 00:28:55.304 --rc geninfo_unexecuted_blocks=1 00:28:55.304 00:28:55.304 ' 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:55.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.304 --rc genhtml_branch_coverage=1 00:28:55.304 --rc genhtml_function_coverage=1 00:28:55.304 --rc genhtml_legend=1 00:28:55.304 --rc geninfo_all_blocks=1 00:28:55.304 --rc geninfo_unexecuted_blocks=1 00:28:55.304 00:28:55.304 ' 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:55.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.304 --rc genhtml_branch_coverage=1 00:28:55.304 --rc genhtml_function_coverage=1 00:28:55.304 --rc genhtml_legend=1 00:28:55.304 --rc geninfo_all_blocks=1 00:28:55.304 --rc geninfo_unexecuted_blocks=1 00:28:55.304 00:28:55.304 ' 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:55.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.304 --rc genhtml_branch_coverage=1 00:28:55.304 --rc genhtml_function_coverage=1 00:28:55.304 --rc genhtml_legend=1 00:28:55.304 --rc geninfo_all_blocks=1 00:28:55.304 --rc geninfo_unexecuted_blocks=1 00:28:55.304 00:28:55.304 ' 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:55.304 
11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:55.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:28:55.304 00:28:55.304 real 0m0.174s 00:28:55.304 user 0m0.100s 00:28:55.304 sys 0m0.088s 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:28:55.304 ************************************ 00:28:55.304 END TEST dma 00:28:55.304 ************************************ 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.304 ************************************ 00:28:55.304 START TEST nvmf_identify 00:28:55.304 
************************************ 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:55.304 * Looking for test storage... 00:28:55.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:28:55.304 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:55.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.305 --rc genhtml_branch_coverage=1 00:28:55.305 --rc genhtml_function_coverage=1 00:28:55.305 --rc genhtml_legend=1 00:28:55.305 --rc geninfo_all_blocks=1 00:28:55.305 --rc geninfo_unexecuted_blocks=1 00:28:55.305 00:28:55.305 ' 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:55.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.305 --rc genhtml_branch_coverage=1 00:28:55.305 --rc genhtml_function_coverage=1 00:28:55.305 --rc genhtml_legend=1 00:28:55.305 --rc geninfo_all_blocks=1 00:28:55.305 --rc geninfo_unexecuted_blocks=1 00:28:55.305 00:28:55.305 ' 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:55.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.305 --rc genhtml_branch_coverage=1 00:28:55.305 --rc genhtml_function_coverage=1 00:28:55.305 --rc genhtml_legend=1 00:28:55.305 --rc geninfo_all_blocks=1 00:28:55.305 --rc geninfo_unexecuted_blocks=1 00:28:55.305 00:28:55.305 ' 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:55.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.305 --rc genhtml_branch_coverage=1 00:28:55.305 --rc genhtml_function_coverage=1 00:28:55.305 --rc genhtml_legend=1 00:28:55.305 --rc geninfo_all_blocks=1 00:28:55.305 --rc geninfo_unexecuted_blocks=1 00:28:55.305 00:28:55.305 ' 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:55.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:55.305 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:55.306 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.306 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.306 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.306 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:55.306 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:55.306 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:28:55.306 11:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:00.587 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:00.587 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:00.588 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:00.588 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
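[editor's note] The device-detection trace above can be hard to follow in raw xtrace form. The sketch below is an illustrative reconstruction only (it is not the actual gather_supported_nvmf_pci_devs logic in nvmf/common.sh, which also covers Mellanox IDs, RDMA transports, and driver-binding checks): it shows how an Intel E810 PCI function such as 0000:af:00.0 (0x8086:0x159b, as seen in this log) is mapped to the kernel net device name (cvl_0_0 / cvl_0_1) by globbing the device's net/ directory in sysfs.

    #!/usr/bin/env bash
    # Minimal sketch, assuming sysfs access; not the real common.sh helper.
    set -euo pipefail

    intel=0x8086
    e810_dev=0x159b          # device ID that appears in the trace above

    for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor")
      device=$(cat "$pci/device")
      [[ $vendor == "$intel" && $device == "$e810_dev" ]] || continue
      # each matching PCI function exposes its bound interfaces under .../net/
      pci_net_devs=("$pci"/net/*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only names, e.g. cvl_0_0
      echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
    done

[editor's note] In the log, exactly this kind of lookup produces the "Found net devices under 0000:af:00.0: cvl_0_0" and "... 0000:af:00.1: cvl_0_1" lines that follow.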
00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:00.588 Found net devices under 0000:af:00.0: cvl_0_0 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:00.588 Found net devices under 0000:af:00.1: cvl_0_1 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:00.588 11:23:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:00.588 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:00.588 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:00.588 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:00.588 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.588 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:00.589 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:00.589 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:00.849 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:00.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:00.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:29:00.849 00:29:00.849 --- 10.0.0.2 ping statistics --- 00:29:00.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.849 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:29:00.849 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:00.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:00.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:29:00.849 00:29:00.849 --- 10.0.0.1 ping statistics --- 00:29:00.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.849 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:29:00.849 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.849 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:29:00.849 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:00.849 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.849 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:00.849 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:00.849 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.849 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:00.849 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:00.849 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:00.850 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:00.850 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:00.850 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2180854 00:29:00.850 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:00.850 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:00.850 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2180854 00:29:00.850 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2180854 ']' 00:29:00.850 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.850 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:00.850 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.850 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:00.850 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:00.850 [2024-10-06 11:23:58.271795] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
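[editor's note] The nvmf_tcp_init sequence traced above isolates the target from the initiator with a network namespace before nvmf_tgt is started. The following is a condensed reconstruction of the commands visible in this log (run as root; interface, namespace, and address values are the ones that appear in the trace — the real helper also handles second target/initiator IPs and cleanup, which are empty here):

    NS=cvl_0_0_ns_spdk
    TGT_IF=cvl_0_0          # target-side port, moved into the namespace
    INI_IF=cvl_0_1          # initiator-side port, stays in the default namespace

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # allow the NVMe/TCP port through the host firewall (as the ipts helper does)
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

    # connectivity checks matching the two pings recorded in the log
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

[editor's note] After both pings succeed, the test wraps NVMF_APP with the namespace command, so the target is launched as "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF", which is the nvmfpid=2180854 process seen below.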
00:29:00.850 [2024-10-06 11:23:58.271842] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.850 [2024-10-06 11:23:58.330768] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:00.850 [2024-10-06 11:23:58.373869] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.850 [2024-10-06 11:23:58.373909] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.850 [2024-10-06 11:23:58.373916] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.850 [2024-10-06 11:23:58.373922] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.850 [2024-10-06 11:23:58.373927] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:00.850 [2024-10-06 11:23:58.375282] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.850 [2024-10-06 11:23:58.375379] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.850 [2024-10-06 11:23:58.375478] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:29:00.850 [2024-10-06 11:23:58.375479] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:01.109 [2024-10-06 11:23:58.475562] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:01.109 Malloc0 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.109 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:01.109 [2024-10-06 11:23:58.563220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.110 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.110 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:01.110 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.110 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:01.110 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.110 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:01.110 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.110 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:01.110 [ 00:29:01.110 { 00:29:01.110 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:01.110 "subtype": "Discovery", 00:29:01.110 "listen_addresses": [ 00:29:01.110 { 00:29:01.110 "trtype": "TCP", 00:29:01.110 "adrfam": "IPv4", 00:29:01.110 "traddr": "10.0.0.2", 00:29:01.110 "trsvcid": "4420" 00:29:01.110 } 00:29:01.110 ], 00:29:01.110 "allow_any_host": true, 00:29:01.110 "hosts": [] 00:29:01.110 }, 00:29:01.110 { 00:29:01.110 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:01.110 "subtype": "NVMe", 00:29:01.110 "listen_addresses": [ 00:29:01.110 { 00:29:01.110 "trtype": "TCP", 00:29:01.110 "adrfam": "IPv4", 00:29:01.110 "traddr": "10.0.0.2", 00:29:01.110 "trsvcid": "4420" 00:29:01.110 } 00:29:01.110 ], 00:29:01.110 "allow_any_host": true, 00:29:01.110 "hosts": [], 00:29:01.110 "serial_number": "SPDK00000000000001", 00:29:01.110 "model_number": "SPDK bdev Controller", 00:29:01.110 "max_namespaces": 32, 00:29:01.110 "min_cntlid": 1, 00:29:01.110 "max_cntlid": 65519, 00:29:01.110 "namespaces": [ 00:29:01.110 { 00:29:01.110 "nsid": 1, 00:29:01.110 "bdev_name": "Malloc0", 00:29:01.110 "name": "Malloc0", 00:29:01.110 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:01.110 "eui64": "ABCDEF0123456789", 00:29:01.110 "uuid": "9e1d88f7-b7bc-4688-9a96-1c1c850f56ec" 00:29:01.110 } 00:29:01.110 ] 00:29:01.110 } 00:29:01.110 ] 00:29:01.110 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.110 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:01.110 [2024-10-06 11:23:58.614296] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:29:01.110 [2024-10-06 11:23:58.614337] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180953 ] 00:29:01.110 [2024-10-06 11:23:58.640426] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:01.110 [2024-10-06 11:23:58.640473] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:01.110 [2024-10-06 11:23:58.640477] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:01.110 [2024-10-06 11:23:58.640488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:01.110 [2024-10-06 11:23:58.640496] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:01.110 [2024-10-06 11:23:58.644294] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:01.110 [2024-10-06 11:23:58.644331] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x159ead0 0 00:29:01.110 [2024-10-06 11:23:58.652071] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:01.110 [2024-10-06 11:23:58.652087] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:01.110 [2024-10-06 11:23:58.652091] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:01.110 [2024-10-06 11:23:58.652094] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:01.110 [2024-10-06 11:23:58.652128] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.652134] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.652137] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159ead0) 00:29:01.110 [2024-10-06 11:23:58.652150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:01.110 [2024-10-06 11:23:58.652181] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4300, cid 0, qid 0 00:29:01.110 [2024-10-06 11:23:58.660071] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.110 [2024-10-06 11:23:58.660080] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.110 [2024-10-06 11:23:58.660084] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.660088] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4300) on tqpair=0x159ead0 00:29:01.110 [2024-10-06 11:23:58.660100] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:01.110 [2024-10-06 11:23:58.660107] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:01.110 [2024-10-06 11:23:58.660111] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:01.110 [2024-10-06 11:23:58.660125] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.660129] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.660133] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159ead0) 00:29:01.110 [2024-10-06 11:23:58.660140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.110 [2024-10-06 11:23:58.660157] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4300, cid 0, qid 0 00:29:01.110 [2024-10-06 11:23:58.660331] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.110 [2024-10-06 11:23:58.660337] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.110 [2024-10-06 11:23:58.660340] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.660344] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4300) on tqpair=0x159ead0 00:29:01.110 [2024-10-06 11:23:58.660348] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:01.110 [2024-10-06 11:23:58.660354] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:01.110 [2024-10-06 11:23:58.660361] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.660364] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.660367] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159ead0) 00:29:01.110 [2024-10-06 11:23:58.660373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.110 [2024-10-06 11:23:58.660383] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4300, cid 0, qid 0 00:29:01.110 [2024-10-06 11:23:58.660463] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.110 [2024-10-06 11:23:58.660468] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.110 [2024-10-06 11:23:58.660471] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.660474] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4300) on tqpair=0x159ead0 00:29:01.110 [2024-10-06 11:23:58.660479] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:01.110 [2024-10-06 11:23:58.660486] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:01.110 [2024-10-06 11:23:58.660492] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.660495] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.660498] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159ead0) 00:29:01.110 [2024-10-06 11:23:58.660504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.110 [2024-10-06 11:23:58.660513] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4300, cid 0, qid 0 00:29:01.110 
[2024-10-06 11:23:58.660587] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.110 [2024-10-06 11:23:58.660593] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.110 [2024-10-06 11:23:58.660596] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.660599] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4300) on tqpair=0x159ead0 00:29:01.110 [2024-10-06 11:23:58.660604] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:01.110 [2024-10-06 11:23:58.660612] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.660615] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.660618] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159ead0) 00:29:01.110 [2024-10-06 11:23:58.660624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.110 [2024-10-06 11:23:58.660632] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4300, cid 0, qid 0 00:29:01.110 [2024-10-06 11:23:58.660709] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.110 [2024-10-06 11:23:58.660714] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.110 [2024-10-06 11:23:58.660719] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.660722] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4300) on tqpair=0x159ead0 00:29:01.110 [2024-10-06 11:23:58.660727] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:01.110 [2024-10-06 11:23:58.660731] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:01.110 [2024-10-06 11:23:58.660737] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:01.110 [2024-10-06 11:23:58.660842] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:01.110 [2024-10-06 11:23:58.660847] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:01.110 [2024-10-06 11:23:58.660854] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.660857] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.660860] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159ead0) 00:29:01.110 [2024-10-06 11:23:58.660866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.110 [2024-10-06 11:23:58.660875] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4300, cid 0, qid 0 00:29:01.110 [2024-10-06 11:23:58.660947] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.110 [2024-10-06 11:23:58.660952] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:29:01.110 [2024-10-06 11:23:58.660955] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.660958] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4300) on tqpair=0x159ead0 00:29:01.110 [2024-10-06 11:23:58.660963] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:01.110 [2024-10-06 11:23:58.660971] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.660974] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.660977] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159ead0) 00:29:01.110 [2024-10-06 11:23:58.660983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.110 [2024-10-06 11:23:58.660992] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4300, cid 0, qid 0 00:29:01.110 [2024-10-06 11:23:58.661073] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.110 [2024-10-06 11:23:58.661079] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.110 [2024-10-06 11:23:58.661082] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661085] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4300) on tqpair=0x159ead0 00:29:01.110 [2024-10-06 11:23:58.661089] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:01.110 [2024-10-06 11:23:58.661093] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:01.110 [2024-10-06 11:23:58.661100] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:01.110 [2024-10-06 11:23:58.661112] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:01.110 [2024-10-06 11:23:58.661120] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661125] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159ead0) 00:29:01.110 [2024-10-06 11:23:58.661131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.110 [2024-10-06 11:23:58.661141] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4300, cid 0, qid 0 00:29:01.110 [2024-10-06 11:23:58.661257] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:01.110 [2024-10-06 11:23:58.661263] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:01.110 [2024-10-06 11:23:58.661266] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661270] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x159ead0): datao=0, datal=4096, cccid=0 00:29:01.110 [2024-10-06 11:23:58.661274] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f4300) on tqpair(0x159ead0): expected_datao=0, 
payload_size=4096 00:29:01.110 [2024-10-06 11:23:58.661278] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661284] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661288] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661310] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.110 [2024-10-06 11:23:58.661315] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.110 [2024-10-06 11:23:58.661318] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661321] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4300) on tqpair=0x159ead0 00:29:01.110 [2024-10-06 11:23:58.661328] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:01.110 [2024-10-06 11:23:58.661333] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:01.110 [2024-10-06 11:23:58.661337] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:01.110 [2024-10-06 11:23:58.661341] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:01.110 [2024-10-06 11:23:58.661345] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:01.110 [2024-10-06 11:23:58.661349] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:01.110 [2024-10-06 11:23:58.661358] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:01.110 [2024-10-06 11:23:58.661364] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661367] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661370] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159ead0) 00:29:01.110 [2024-10-06 11:23:58.661376] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:01.110 [2024-10-06 11:23:58.661386] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4300, cid 0, qid 0 00:29:01.110 [2024-10-06 11:23:58.661468] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.110 [2024-10-06 11:23:58.661473] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.110 [2024-10-06 11:23:58.661476] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661479] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4300) on tqpair=0x159ead0 00:29:01.110 [2024-10-06 11:23:58.661486] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661489] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661492] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159ead0) 00:29:01.110 [2024-10-06 11:23:58.661500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.110 [2024-10-06 11:23:58.661505] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661508] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661511] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x159ead0) 00:29:01.110 [2024-10-06 11:23:58.661516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.110 [2024-10-06 11:23:58.661521] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661524] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661527] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x159ead0) 00:29:01.110 [2024-10-06 11:23:58.661531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.110 [2024-10-06 11:23:58.661536] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661539] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661542] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.110 [2024-10-06 11:23:58.661547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.110 [2024-10-06 11:23:58.661551] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:01.110 [2024-10-06 11:23:58.661562] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:01.110 [2024-10-06 11:23:58.661568] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661571] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x159ead0) 00:29:01.110 [2024-10-06 11:23:58.661576] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.110 [2024-10-06 11:23:58.661587] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4300, cid 0, qid 0 00:29:01.110 [2024-10-06 11:23:58.661591] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4480, cid 1, qid 0 00:29:01.110 [2024-10-06 11:23:58.661595] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4600, cid 2, qid 0 00:29:01.110 [2024-10-06 11:23:58.661599] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.110 [2024-10-06 11:23:58.661603] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4900, cid 4, qid 0 00:29:01.110 [2024-10-06 11:23:58.661714] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.110 [2024-10-06 11:23:58.661720] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.110 [2024-10-06 11:23:58.661723] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.110 [2024-10-06 11:23:58.661726] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x15f4900) on tqpair=0x159ead0 00:29:01.110 [2024-10-06 11:23:58.661730] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:01.111 [2024-10-06 11:23:58.661735] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:01.111 [2024-10-06 11:23:58.661744] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.111 [2024-10-06 11:23:58.661748] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x159ead0) 00:29:01.111 [2024-10-06 11:23:58.661753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.111 [2024-10-06 11:23:58.661764] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4900, cid 4, qid 0 00:29:01.111 [2024-10-06 11:23:58.661851] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:01.111 [2024-10-06 11:23:58.661857] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:01.111 [2024-10-06 11:23:58.661860] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:01.111 [2024-10-06 11:23:58.661863] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x159ead0): datao=0, datal=4096, cccid=4 00:29:01.111 [2024-10-06 11:23:58.661867] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f4900) on tqpair(0x159ead0): expected_datao=0, payload_size=4096 00:29:01.111 [2024-10-06 11:23:58.661870] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.111 [2024-10-06 11:23:58.661893] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:01.111 [2024-10-06 11:23:58.661897] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.702182] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.373 [2024-10-06 11:23:58.702196] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.373 [2024-10-06 11:23:58.702199] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.702203] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4900) on tqpair=0x159ead0 00:29:01.373 [2024-10-06 11:23:58.702214] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:01.373 [2024-10-06 11:23:58.702242] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.702246] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x159ead0) 00:29:01.373 [2024-10-06 11:23:58.702253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.373 [2024-10-06 11:23:58.702259] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.702262] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.702266] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x159ead0) 00:29:01.373 [2024-10-06 11:23:58.702271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.373 [2024-10-06 
11:23:58.702283] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4900, cid 4, qid 0 00:29:01.373 [2024-10-06 11:23:58.702288] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4a80, cid 5, qid 0 00:29:01.373 [2024-10-06 11:23:58.702396] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:01.373 [2024-10-06 11:23:58.702402] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:01.373 [2024-10-06 11:23:58.702406] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.702409] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x159ead0): datao=0, datal=1024, cccid=4 00:29:01.373 [2024-10-06 11:23:58.702412] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f4900) on tqpair(0x159ead0): expected_datao=0, payload_size=1024 00:29:01.373 [2024-10-06 11:23:58.702416] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.702422] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.702425] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.702430] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.373 [2024-10-06 11:23:58.702434] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.373 [2024-10-06 11:23:58.702437] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.702440] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4a80) on tqpair=0x159ead0 00:29:01.373 [2024-10-06 11:23:58.743276] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.373 [2024-10-06 11:23:58.743286] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.373 [2024-10-06 11:23:58.743292] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.743296] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4900) on tqpair=0x159ead0 00:29:01.373 [2024-10-06 11:23:58.743306] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.743310] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x159ead0) 00:29:01.373 [2024-10-06 11:23:58.743316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.373 [2024-10-06 11:23:58.743333] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4900, cid 4, qid 0 00:29:01.373 [2024-10-06 11:23:58.743426] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:01.373 [2024-10-06 11:23:58.743432] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:01.373 [2024-10-06 11:23:58.743435] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.743438] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x159ead0): datao=0, datal=3072, cccid=4 00:29:01.373 [2024-10-06 11:23:58.743442] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f4900) on tqpair(0x159ead0): expected_datao=0, payload_size=3072 00:29:01.373 [2024-10-06 11:23:58.743446] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.743452] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.743455] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.743477] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.373 [2024-10-06 11:23:58.743482] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.373 [2024-10-06 11:23:58.743485] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.743488] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4900) on tqpair=0x159ead0 00:29:01.373 [2024-10-06 11:23:58.743495] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.743499] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x159ead0) 00:29:01.373 [2024-10-06 11:23:58.743504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.373 [2024-10-06 11:23:58.743517] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4900, cid 4, qid 0 00:29:01.373 [2024-10-06 11:23:58.743596] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:01.373 [2024-10-06 11:23:58.743602] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:01.373 [2024-10-06 11:23:58.743605] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.743608] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x159ead0): datao=0, datal=8, cccid=4 00:29:01.373 [2024-10-06 11:23:58.743612] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f4900) on tqpair(0x159ead0): expected_datao=0, payload_size=8 00:29:01.373 [2024-10-06 11:23:58.743615] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.743621] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.743624] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.786070] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.373 [2024-10-06 11:23:58.786082] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.373 [2024-10-06 11:23:58.786085] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.373 [2024-10-06 11:23:58.786089] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4900) on tqpair=0x159ead0 00:29:01.373 ===================================================== 00:29:01.373 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:01.373 ===================================================== 00:29:01.373 Controller Capabilities/Features 00:29:01.373 ================================ 00:29:01.373 Vendor ID: 0000 00:29:01.373 Subsystem Vendor ID: 0000 00:29:01.373 Serial Number: .................... 00:29:01.373 Model Number: ........................................ 
00:29:01.373 Firmware Version: 25.01 00:29:01.373 Recommended Arb Burst: 0 00:29:01.373 IEEE OUI Identifier: 00 00 00 00:29:01.373 Multi-path I/O 00:29:01.373 May have multiple subsystem ports: No 00:29:01.373 May have multiple controllers: No 00:29:01.373 Associated with SR-IOV VF: No 00:29:01.373 Max Data Transfer Size: 131072 00:29:01.373 Max Number of Namespaces: 0 00:29:01.373 Max Number of I/O Queues: 1024 00:29:01.373 NVMe Specification Version (VS): 1.3 00:29:01.373 NVMe Specification Version (Identify): 1.3 00:29:01.373 Maximum Queue Entries: 128 00:29:01.373 Contiguous Queues Required: Yes 00:29:01.373 Arbitration Mechanisms Supported 00:29:01.373 Weighted Round Robin: Not Supported 00:29:01.373 Vendor Specific: Not Supported 00:29:01.373 Reset Timeout: 15000 ms 00:29:01.373 Doorbell Stride: 4 bytes 00:29:01.373 NVM Subsystem Reset: Not Supported 00:29:01.373 Command Sets Supported 00:29:01.373 NVM Command Set: Supported 00:29:01.373 Boot Partition: Not Supported 00:29:01.373 Memory Page Size Minimum: 4096 bytes 00:29:01.373 Memory Page Size Maximum: 4096 bytes 00:29:01.373 Persistent Memory Region: Not Supported 00:29:01.373 Optional Asynchronous Events Supported 00:29:01.373 Namespace Attribute Notices: Not Supported 00:29:01.373 Firmware Activation Notices: Not Supported 00:29:01.373 ANA Change Notices: Not Supported 00:29:01.373 PLE Aggregate Log Change Notices: Not Supported 00:29:01.373 LBA Status Info Alert Notices: Not Supported 00:29:01.373 EGE Aggregate Log Change Notices: Not Supported 00:29:01.373 Normal NVM Subsystem Shutdown event: Not Supported 00:29:01.373 Zone Descriptor Change Notices: Not Supported 00:29:01.373 Discovery Log Change Notices: Supported 00:29:01.373 Controller Attributes 00:29:01.374 128-bit Host Identifier: Not Supported 00:29:01.374 Non-Operational Permissive Mode: Not Supported 00:29:01.374 NVM Sets: Not Supported 00:29:01.374 Read Recovery Levels: Not Supported 00:29:01.374 Endurance Groups: Not Supported 00:29:01.374 Predictable Latency Mode: Not Supported 00:29:01.374 Traffic Based Keep ALive: Not Supported 00:29:01.374 Namespace Granularity: Not Supported 00:29:01.374 SQ Associations: Not Supported 00:29:01.374 UUID List: Not Supported 00:29:01.374 Multi-Domain Subsystem: Not Supported 00:29:01.374 Fixed Capacity Management: Not Supported 00:29:01.374 Variable Capacity Management: Not Supported 00:29:01.374 Delete Endurance Group: Not Supported 00:29:01.374 Delete NVM Set: Not Supported 00:29:01.374 Extended LBA Formats Supported: Not Supported 00:29:01.374 Flexible Data Placement Supported: Not Supported 00:29:01.374 00:29:01.374 Controller Memory Buffer Support 00:29:01.374 ================================ 00:29:01.374 Supported: No 00:29:01.374 00:29:01.374 Persistent Memory Region Support 00:29:01.374 ================================ 00:29:01.374 Supported: No 00:29:01.374 00:29:01.374 Admin Command Set Attributes 00:29:01.374 ============================ 00:29:01.374 Security Send/Receive: Not Supported 00:29:01.374 Format NVM: Not Supported 00:29:01.374 Firmware Activate/Download: Not Supported 00:29:01.374 Namespace Management: Not Supported 00:29:01.374 Device Self-Test: Not Supported 00:29:01.374 Directives: Not Supported 00:29:01.374 NVMe-MI: Not Supported 00:29:01.374 Virtualization Management: Not Supported 00:29:01.374 Doorbell Buffer Config: Not Supported 00:29:01.374 Get LBA Status Capability: Not Supported 00:29:01.374 Command & Feature Lockdown Capability: Not Supported 00:29:01.374 Abort Command Limit: 1 00:29:01.374 Async 
Event Request Limit: 4 00:29:01.374 Number of Firmware Slots: N/A 00:29:01.374 Firmware Slot 1 Read-Only: N/A 00:29:01.374 Firmware Activation Without Reset: N/A 00:29:01.374 Multiple Update Detection Support: N/A 00:29:01.374 Firmware Update Granularity: No Information Provided 00:29:01.374 Per-Namespace SMART Log: No 00:29:01.374 Asymmetric Namespace Access Log Page: Not Supported 00:29:01.374 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:01.374 Command Effects Log Page: Not Supported 00:29:01.374 Get Log Page Extended Data: Supported 00:29:01.374 Telemetry Log Pages: Not Supported 00:29:01.374 Persistent Event Log Pages: Not Supported 00:29:01.374 Supported Log Pages Log Page: May Support 00:29:01.374 Commands Supported & Effects Log Page: Not Supported 00:29:01.374 Feature Identifiers & Effects Log Page:May Support 00:29:01.374 NVMe-MI Commands & Effects Log Page: May Support 00:29:01.374 Data Area 4 for Telemetry Log: Not Supported 00:29:01.374 Error Log Page Entries Supported: 128 00:29:01.374 Keep Alive: Not Supported 00:29:01.374 00:29:01.374 NVM Command Set Attributes 00:29:01.374 ========================== 00:29:01.374 Submission Queue Entry Size 00:29:01.374 Max: 1 00:29:01.374 Min: 1 00:29:01.374 Completion Queue Entry Size 00:29:01.374 Max: 1 00:29:01.374 Min: 1 00:29:01.374 Number of Namespaces: 0 00:29:01.374 Compare Command: Not Supported 00:29:01.374 Write Uncorrectable Command: Not Supported 00:29:01.374 Dataset Management Command: Not Supported 00:29:01.374 Write Zeroes Command: Not Supported 00:29:01.374 Set Features Save Field: Not Supported 00:29:01.374 Reservations: Not Supported 00:29:01.374 Timestamp: Not Supported 00:29:01.374 Copy: Not Supported 00:29:01.374 Volatile Write Cache: Not Present 00:29:01.374 Atomic Write Unit (Normal): 1 00:29:01.374 Atomic Write Unit (PFail): 1 00:29:01.374 Atomic Compare & Write Unit: 1 00:29:01.374 Fused Compare & Write: Supported 00:29:01.374 Scatter-Gather List 00:29:01.374 SGL Command Set: Supported 00:29:01.374 SGL Keyed: Supported 00:29:01.374 SGL Bit Bucket Descriptor: Not Supported 00:29:01.374 SGL Metadata Pointer: Not Supported 00:29:01.374 Oversized SGL: Not Supported 00:29:01.374 SGL Metadata Address: Not Supported 00:29:01.374 SGL Offset: Supported 00:29:01.374 Transport SGL Data Block: Not Supported 00:29:01.374 Replay Protected Memory Block: Not Supported 00:29:01.374 00:29:01.374 Firmware Slot Information 00:29:01.374 ========================= 00:29:01.374 Active slot: 0 00:29:01.374 00:29:01.374 00:29:01.374 Error Log 00:29:01.374 ========= 00:29:01.374 00:29:01.374 Active Namespaces 00:29:01.374 ================= 00:29:01.374 Discovery Log Page 00:29:01.374 ================== 00:29:01.374 Generation Counter: 2 00:29:01.374 Number of Records: 2 00:29:01.374 Record Format: 0 00:29:01.374 00:29:01.374 Discovery Log Entry 0 00:29:01.374 ---------------------- 00:29:01.374 Transport Type: 3 (TCP) 00:29:01.374 Address Family: 1 (IPv4) 00:29:01.374 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:01.374 Entry Flags: 00:29:01.374 Duplicate Returned Information: 1 00:29:01.374 Explicit Persistent Connection Support for Discovery: 1 00:29:01.374 Transport Requirements: 00:29:01.374 Secure Channel: Not Required 00:29:01.374 Port ID: 0 (0x0000) 00:29:01.374 Controller ID: 65535 (0xffff) 00:29:01.374 Admin Max SQ Size: 128 00:29:01.374 Transport Service Identifier: 4420 00:29:01.374 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:01.374 Transport Address: 10.0.0.2 00:29:01.374 
Discovery Log Entry 1 00:29:01.374 ---------------------- 00:29:01.374 Transport Type: 3 (TCP) 00:29:01.374 Address Family: 1 (IPv4) 00:29:01.374 Subsystem Type: 2 (NVM Subsystem) 00:29:01.374 Entry Flags: 00:29:01.374 Duplicate Returned Information: 0 00:29:01.374 Explicit Persistent Connection Support for Discovery: 0 00:29:01.374 Transport Requirements: 00:29:01.374 Secure Channel: Not Required 00:29:01.374 Port ID: 0 (0x0000) 00:29:01.374 Controller ID: 65535 (0xffff) 00:29:01.374 Admin Max SQ Size: 128 00:29:01.374 Transport Service Identifier: 4420 00:29:01.374 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:01.374 Transport Address: 10.0.0.2 [2024-10-06 11:23:58.786168] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:01.374 [2024-10-06 11:23:58.786179] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4300) on tqpair=0x159ead0 00:29:01.374 [2024-10-06 11:23:58.786187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.374 [2024-10-06 11:23:58.786192] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4480) on tqpair=0x159ead0 00:29:01.374 [2024-10-06 11:23:58.786196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.374 [2024-10-06 11:23:58.786200] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4600) on tqpair=0x159ead0 00:29:01.374 [2024-10-06 11:23:58.786204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.374 [2024-10-06 11:23:58.786208] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.374 [2024-10-06 11:23:58.786212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.374 [2024-10-06 11:23:58.786219] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.374 [2024-10-06 11:23:58.786223] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.374 [2024-10-06 11:23:58.786226] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.374 [2024-10-06 11:23:58.786233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.374 [2024-10-06 11:23:58.786246] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.374 [2024-10-06 11:23:58.786323] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.374 [2024-10-06 11:23:58.786329] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.374 [2024-10-06 11:23:58.786332] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.374 [2024-10-06 11:23:58.786335] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.374 [2024-10-06 11:23:58.786341] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.374 [2024-10-06 11:23:58.786344] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.374 [2024-10-06 11:23:58.786347] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.374 [2024-10-06 
11:23:58.786353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.374 [2024-10-06 11:23:58.786367] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.374 [2024-10-06 11:23:58.786450] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.374 [2024-10-06 11:23:58.786455] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.374 [2024-10-06 11:23:58.786458] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.786461] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.375 [2024-10-06 11:23:58.786466] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:01.375 [2024-10-06 11:23:58.786473] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:01.375 [2024-10-06 11:23:58.786481] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.786484] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.786487] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.375 [2024-10-06 11:23:58.786493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.375 [2024-10-06 11:23:58.786502] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.375 [2024-10-06 11:23:58.786575] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.375 [2024-10-06 11:23:58.786581] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.375 [2024-10-06 11:23:58.786585] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.786589] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.375 [2024-10-06 11:23:58.786598] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.786601] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.786604] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.375 [2024-10-06 11:23:58.786610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.375 [2024-10-06 11:23:58.786619] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.375 [2024-10-06 11:23:58.786695] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.375 [2024-10-06 11:23:58.786700] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.375 [2024-10-06 11:23:58.786703] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.786706] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.375 [2024-10-06 11:23:58.786714] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.786718] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.786721] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.375 [2024-10-06 11:23:58.786726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.375 [2024-10-06 11:23:58.786735] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.375 [2024-10-06 11:23:58.786809] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.375 [2024-10-06 11:23:58.786815] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.375 [2024-10-06 11:23:58.786818] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.786821] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.375 [2024-10-06 11:23:58.786829] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.786832] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.786835] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.375 [2024-10-06 11:23:58.786841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.375 [2024-10-06 11:23:58.786851] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.375 [2024-10-06 11:23:58.786924] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.375 [2024-10-06 11:23:58.786929] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.375 [2024-10-06 11:23:58.786932] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.786935] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.375 [2024-10-06 11:23:58.786943] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.786946] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.786949] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.375 [2024-10-06 11:23:58.786955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.375 [2024-10-06 11:23:58.786964] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.375 [2024-10-06 11:23:58.787035] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.375 [2024-10-06 11:23:58.787041] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.375 [2024-10-06 11:23:58.787043] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787048] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.375 [2024-10-06 11:23:58.787056] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787069] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787072] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.375 [2024-10-06 11:23:58.787078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.375 [2024-10-06 11:23:58.787087] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.375 [2024-10-06 11:23:58.787159] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.375 [2024-10-06 11:23:58.787165] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.375 [2024-10-06 11:23:58.787168] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787171] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.375 [2024-10-06 11:23:58.787179] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787182] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787185] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.375 [2024-10-06 11:23:58.787191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.375 [2024-10-06 11:23:58.787200] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.375 [2024-10-06 11:23:58.787273] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.375 [2024-10-06 11:23:58.787278] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.375 [2024-10-06 11:23:58.787281] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787284] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.375 [2024-10-06 11:23:58.787292] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787295] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787298] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.375 [2024-10-06 11:23:58.787304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.375 [2024-10-06 11:23:58.787313] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.375 [2024-10-06 11:23:58.787387] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.375 [2024-10-06 11:23:58.787393] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.375 [2024-10-06 11:23:58.787395] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787399] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.375 [2024-10-06 11:23:58.787407] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787410] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787413] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.375 [2024-10-06 11:23:58.787419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.375 [2024-10-06 11:23:58.787427] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.375 
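The repeated FABRIC PROPERTY GET commands in the surrounding entries are the host driver polling CSTS while it shuts the discovery controller down, the sequence that begins at "Prepare to destruct SSD" and ends at "shutdown complete in 5 milliseconds". A minimal sketch of the host-side call that triggers this teardown, assuming only the public SPDK NVMe API; the function name shutdown_example is illustrative and is not part of the test scripts:

#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative only: given an already-attached controller handle, this is the
 * kind of call that produces the shutdown sequence captured in the log above. */
static void shutdown_example(struct spdk_nvme_ctrlr *ctrlr)
{
        /* spdk_nvme_detach() requests shutdown through the CC register and then
         * polls CSTS (over NVMe/TCP those reads are the repeated FABRIC
         * PROPERTY GET commands) until the controller reports completion,
         * after which the controller object and its admin qpair are freed. */
        if (spdk_nvme_detach(ctrlr) != 0) {
                fprintf(stderr, "detach failed\n");
        }
}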
[2024-10-06 11:23:58.787498] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.375 [2024-10-06 11:23:58.787503] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.375 [2024-10-06 11:23:58.787506] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787510] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.375 [2024-10-06 11:23:58.787519] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787523] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787526] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.375 [2024-10-06 11:23:58.787531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.375 [2024-10-06 11:23:58.787540] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.375 [2024-10-06 11:23:58.787611] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.375 [2024-10-06 11:23:58.787617] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.375 [2024-10-06 11:23:58.787619] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787623] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.375 [2024-10-06 11:23:58.787631] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787634] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787638] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.375 [2024-10-06 11:23:58.787643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.375 [2024-10-06 11:23:58.787652] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.375 [2024-10-06 11:23:58.787725] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.375 [2024-10-06 11:23:58.787731] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.375 [2024-10-06 11:23:58.787733] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.375 [2024-10-06 11:23:58.787737] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.376 [2024-10-06 11:23:58.787744] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.787748] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.787751] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.376 [2024-10-06 11:23:58.787756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.376 [2024-10-06 11:23:58.787765] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.376 [2024-10-06 11:23:58.787836] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.376 [2024-10-06 11:23:58.787842] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:01.376 [2024-10-06 11:23:58.787845] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.787848] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.376 [2024-10-06 11:23:58.787856] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.787859] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.787862] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.376 [2024-10-06 11:23:58.787868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.376 [2024-10-06 11:23:58.787877] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.376 [2024-10-06 11:23:58.787947] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.376 [2024-10-06 11:23:58.787953] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.376 [2024-10-06 11:23:58.787956] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.787959] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.376 [2024-10-06 11:23:58.787967] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.787973] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.787977] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.376 [2024-10-06 11:23:58.787982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.376 [2024-10-06 11:23:58.787991] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.376 [2024-10-06 11:23:58.788071] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.376 [2024-10-06 11:23:58.788077] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.376 [2024-10-06 11:23:58.788080] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.788083] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.376 [2024-10-06 11:23:58.788091] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.788095] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.788098] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.376 [2024-10-06 11:23:58.788103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.376 [2024-10-06 11:23:58.788112] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.376 [2024-10-06 11:23:58.788188] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.376 [2024-10-06 11:23:58.788194] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.376 [2024-10-06 11:23:58.788197] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.788200] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.376 [2024-10-06 11:23:58.788208] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.788211] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.788214] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.376 [2024-10-06 11:23:58.788220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.376 [2024-10-06 11:23:58.788228] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.376 [2024-10-06 11:23:58.788298] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.376 [2024-10-06 11:23:58.788304] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.376 [2024-10-06 11:23:58.788307] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.788310] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.376 [2024-10-06 11:23:58.788318] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.788321] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.788324] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.376 [2024-10-06 11:23:58.788330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.376 [2024-10-06 11:23:58.788338] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.376 [2024-10-06 11:23:58.788409] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.376 [2024-10-06 11:23:58.788414] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.376 [2024-10-06 11:23:58.788417] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.788420] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.376 [2024-10-06 11:23:58.788428] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.788431] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.788436] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.376 [2024-10-06 11:23:58.788442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.376 [2024-10-06 11:23:58.788451] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.376 [2024-10-06 11:23:58.788525] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.376 [2024-10-06 11:23:58.788530] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.376 [2024-10-06 11:23:58.788533] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.788536] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.376 [2024-10-06 11:23:58.788544] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.788548] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.788550] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.376 [2024-10-06 11:23:58.788556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.376 [2024-10-06 11:23:58.788565] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.376 [2024-10-06 11:23:58.788638] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.376 [2024-10-06 11:23:58.788644] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.376 [2024-10-06 11:23:58.788647] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.788650] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.376 [2024-10-06 11:23:58.788658] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.788661] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.788664] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.376 [2024-10-06 11:23:58.788670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.376 [2024-10-06 11:23:58.788678] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.376 [2024-10-06 11:23:58.792069] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.376 [2024-10-06 11:23:58.792077] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.376 [2024-10-06 11:23:58.792080] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.792084] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.376 [2024-10-06 11:23:58.792094] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.792098] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.792101] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159ead0) 00:29:01.376 [2024-10-06 11:23:58.792107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.376 [2024-10-06 11:23:58.792117] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f4780, cid 3, qid 0 00:29:01.376 [2024-10-06 11:23:58.792259] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.376 [2024-10-06 11:23:58.792265] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.376 [2024-10-06 11:23:58.792268] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.376 [2024-10-06 11:23:58.792271] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f4780) on tqpair=0x159ead0 00:29:01.376 [2024-10-06 11:23:58.792277] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:29:01.376 00:29:01.376 11:23:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:01.376 [2024-10-06 11:23:58.827631] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:29:01.376 [2024-10-06 11:23:58.827672] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181073 ] 00:29:01.376 [2024-10-06 11:23:58.854877] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:01.376 [2024-10-06 11:23:58.854920] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:01.376 [2024-10-06 11:23:58.854925] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:01.376 [2024-10-06 11:23:58.854938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:01.376 [2024-10-06 11:23:58.854946] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:01.377 [2024-10-06 11:23:58.855298] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:01.377 [2024-10-06 11:23:58.855321] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x23f0ad0 0 00:29:01.377 [2024-10-06 11:23:58.862077] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:01.377 [2024-10-06 11:23:58.862092] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:01.377 [2024-10-06 11:23:58.862096] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:01.377 [2024-10-06 11:23:58.862099] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:01.377 [2024-10-06 11:23:58.862125] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.862130] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.862133] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23f0ad0) 00:29:01.377 [2024-10-06 11:23:58.862144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:01.377 [2024-10-06 11:23:58.862160] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446300, cid 0, qid 0 00:29:01.377 [2024-10-06 11:23:58.869067] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.377 [2024-10-06 11:23:58.869075] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.377 [2024-10-06 11:23:58.869079] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.869083] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446300) on tqpair=0x23f0ad0 00:29:01.377 [2024-10-06 11:23:58.869093] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:01.377 [2024-10-06 11:23:58.869099] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:01.377 [2024-10-06 11:23:58.869103] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:01.377 [2024-10-06 11:23:58.869113] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.869117] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.869121] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23f0ad0) 00:29:01.377 [2024-10-06 11:23:58.869127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.377 [2024-10-06 11:23:58.869140] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446300, cid 0, qid 0 00:29:01.377 [2024-10-06 11:23:58.869334] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.377 [2024-10-06 11:23:58.869343] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.377 [2024-10-06 11:23:58.869347] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.869350] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446300) on tqpair=0x23f0ad0 00:29:01.377 [2024-10-06 11:23:58.869355] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:01.377 [2024-10-06 11:23:58.869363] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:01.377 [2024-10-06 11:23:58.869369] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.869373] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.869376] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23f0ad0) 00:29:01.377 [2024-10-06 11:23:58.869382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.377 [2024-10-06 11:23:58.869393] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446300, cid 0, qid 0 00:29:01.377 [2024-10-06 11:23:58.869496] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.377 [2024-10-06 11:23:58.869502] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.377 [2024-10-06 11:23:58.869517] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.869521] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446300) on tqpair=0x23f0ad0 00:29:01.377 [2024-10-06 11:23:58.869525] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:01.377 [2024-10-06 11:23:58.869533] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:01.377 [2024-10-06 11:23:58.869539] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.869542] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.869545] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23f0ad0) 00:29:01.377 [2024-10-06 11:23:58.869551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.377 [2024-10-06 11:23:58.869561] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446300, cid 0, qid 0 00:29:01.377 [2024-10-06 11:23:58.869659] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.377 [2024-10-06 11:23:58.869664] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.377 [2024-10-06 11:23:58.869668] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.869671] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446300) on tqpair=0x23f0ad0 00:29:01.377 [2024-10-06 11:23:58.869675] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:01.377 [2024-10-06 11:23:58.869684] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.869688] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.869691] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23f0ad0) 00:29:01.377 [2024-10-06 11:23:58.869696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.377 [2024-10-06 11:23:58.869707] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446300, cid 0, qid 0 00:29:01.377 [2024-10-06 11:23:58.869790] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.377 [2024-10-06 11:23:58.869796] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.377 [2024-10-06 11:23:58.869799] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.869803] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446300) on tqpair=0x23f0ad0 00:29:01.377 [2024-10-06 11:23:58.869809] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:01.377 [2024-10-06 11:23:58.869813] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:01.377 [2024-10-06 11:23:58.869820] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:01.377 [2024-10-06 11:23:58.869925] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:01.377 [2024-10-06 11:23:58.869929] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:01.377 [2024-10-06 11:23:58.869935] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.869939] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.869942] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23f0ad0) 00:29:01.377 [2024-10-06 11:23:58.869948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.377 [2024-10-06 11:23:58.869957] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446300, cid 0, qid 0 00:29:01.377 [2024-10-06 11:23:58.870037] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.377 [2024-10-06 11:23:58.870042] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.377 [2024-10-06 11:23:58.870045] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.870049] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446300) on tqpair=0x23f0ad0 00:29:01.377 [2024-10-06 11:23:58.870053] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:01.377 [2024-10-06 11:23:58.870069] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.870073] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.870076] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23f0ad0) 00:29:01.377 [2024-10-06 11:23:58.870082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.377 [2024-10-06 11:23:58.870092] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446300, cid 0, qid 0 00:29:01.377 [2024-10-06 11:23:58.870172] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.377 [2024-10-06 11:23:58.870178] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.377 [2024-10-06 11:23:58.870182] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.377 [2024-10-06 11:23:58.870185] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446300) on tqpair=0x23f0ad0 00:29:01.377 [2024-10-06 11:23:58.870188] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:01.378 [2024-10-06 11:23:58.870192] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:01.378 [2024-10-06 11:23:58.870199] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:01.378 [2024-10-06 11:23:58.870210] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:01.378 [2024-10-06 11:23:58.870217] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.870220] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23f0ad0) 00:29:01.378 [2024-10-06 11:23:58.870226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.378 [2024-10-06 11:23:58.870255] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446300, cid 0, qid 0 00:29:01.378 [2024-10-06 11:23:58.870380] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:01.378 [2024-10-06 11:23:58.870386] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:01.378 [2024-10-06 11:23:58.870389] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.870393] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23f0ad0): datao=0, datal=4096, cccid=0 00:29:01.378 [2024-10-06 11:23:58.870397] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2446300) on tqpair(0x23f0ad0): expected_datao=0, payload_size=4096 00:29:01.378 [2024-10-06 11:23:58.870401] nvme_tcp.c: 800:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.870423] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.870427] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914065] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.378 [2024-10-06 11:23:58.914076] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.378 [2024-10-06 11:23:58.914079] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914083] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446300) on tqpair=0x23f0ad0 00:29:01.378 [2024-10-06 11:23:58.914089] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:01.378 [2024-10-06 11:23:58.914094] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:01.378 [2024-10-06 11:23:58.914098] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:01.378 [2024-10-06 11:23:58.914102] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:01.378 [2024-10-06 11:23:58.914106] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:01.378 [2024-10-06 11:23:58.914110] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:01.378 [2024-10-06 11:23:58.914119] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:01.378 [2024-10-06 11:23:58.914125] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914128] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914132] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23f0ad0) 00:29:01.378 [2024-10-06 11:23:58.914138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:01.378 [2024-10-06 11:23:58.914150] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446300, cid 0, qid 0 00:29:01.378 [2024-10-06 11:23:58.914292] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.378 [2024-10-06 11:23:58.914297] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.378 [2024-10-06 11:23:58.914301] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914304] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446300) on tqpair=0x23f0ad0 00:29:01.378 [2024-10-06 11:23:58.914310] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914313] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914316] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23f0ad0) 00:29:01.378 [2024-10-06 11:23:58.914321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.378 [2024-10-06 11:23:58.914327] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:29:01.378 [2024-10-06 11:23:58.914330] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914336] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x23f0ad0) 00:29:01.378 [2024-10-06 11:23:58.914341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.378 [2024-10-06 11:23:58.914346] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914349] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914352] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x23f0ad0) 00:29:01.378 [2024-10-06 11:23:58.914357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.378 [2024-10-06 11:23:58.914362] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914366] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914369] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.378 [2024-10-06 11:23:58.914374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.378 [2024-10-06 11:23:58.914378] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:01.378 [2024-10-06 11:23:58.914388] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:01.378 [2024-10-06 11:23:58.914394] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914397] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23f0ad0) 00:29:01.378 [2024-10-06 11:23:58.914403] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.378 [2024-10-06 11:23:58.914414] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446300, cid 0, qid 0 00:29:01.378 [2024-10-06 11:23:58.914419] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446480, cid 1, qid 0 00:29:01.378 [2024-10-06 11:23:58.914423] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446600, cid 2, qid 0 00:29:01.378 [2024-10-06 11:23:58.914427] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.378 [2024-10-06 11:23:58.914431] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446900, cid 4, qid 0 00:29:01.378 [2024-10-06 11:23:58.914587] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.378 [2024-10-06 11:23:58.914593] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.378 [2024-10-06 11:23:58.914597] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914600] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446900) on tqpair=0x23f0ad0 00:29:01.378 [2024-10-06 11:23:58.914604] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep 
alive every 5000000 us 00:29:01.378 [2024-10-06 11:23:58.914608] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:01.378 [2024-10-06 11:23:58.914616] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:01.378 [2024-10-06 11:23:58.914623] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:01.378 [2024-10-06 11:23:58.914629] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914632] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914635] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23f0ad0) 00:29:01.378 [2024-10-06 11:23:58.914641] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:01.378 [2024-10-06 11:23:58.914653] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446900, cid 4, qid 0 00:29:01.378 [2024-10-06 11:23:58.914774] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.378 [2024-10-06 11:23:58.914780] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.378 [2024-10-06 11:23:58.914783] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914787] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446900) on tqpair=0x23f0ad0 00:29:01.378 [2024-10-06 11:23:58.914836] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:01.378 [2024-10-06 11:23:58.914845] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:01.378 [2024-10-06 11:23:58.914851] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914855] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23f0ad0) 00:29:01.378 [2024-10-06 11:23:58.914860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.378 [2024-10-06 11:23:58.914870] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446900, cid 4, qid 0 00:29:01.378 [2024-10-06 11:23:58.914959] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:01.378 [2024-10-06 11:23:58.914964] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:01.378 [2024-10-06 11:23:58.914968] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.914971] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23f0ad0): datao=0, datal=4096, cccid=4 00:29:01.378 [2024-10-06 11:23:58.914975] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2446900) on tqpair(0x23f0ad0): expected_datao=0, payload_size=4096 00:29:01.378 [2024-10-06 11:23:58.914979] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.915020] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:01.378 [2024-10-06 11:23:58.915024] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:01.649 [2024-10-06 11:23:58.955242] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.649 [2024-10-06 11:23:58.955254] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.649 [2024-10-06 11:23:58.955257] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.649 [2024-10-06 11:23:58.955261] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446900) on tqpair=0x23f0ad0 00:29:01.649 [2024-10-06 11:23:58.955270] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:01.649 [2024-10-06 11:23:58.955280] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:01.649 [2024-10-06 11:23:58.955290] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:01.649 [2024-10-06 11:23:58.955297] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.649 [2024-10-06 11:23:58.955300] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23f0ad0) 00:29:01.649 [2024-10-06 11:23:58.955306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.649 [2024-10-06 11:23:58.955318] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446900, cid 4, qid 0 00:29:01.649 [2024-10-06 11:23:58.955418] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:01.649 [2024-10-06 11:23:58.955424] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:01.649 [2024-10-06 11:23:58.955427] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:01.649 [2024-10-06 11:23:58.955430] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23f0ad0): datao=0, datal=4096, cccid=4 00:29:01.649 [2024-10-06 11:23:58.955440] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2446900) on tqpair(0x23f0ad0): expected_datao=0, payload_size=4096 00:29:01.649 [2024-10-06 11:23:58.955444] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.649 [2024-10-06 11:23:58.955480] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:01.649 [2024-10-06 11:23:58.955484] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:01.649 [2024-10-06 11:23:58.996225] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.649 [2024-10-06 11:23:58.996237] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.649 [2024-10-06 11:23:58.996241] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.649 [2024-10-06 11:23:58.996244] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446900) on tqpair=0x23f0ad0 00:29:01.649 [2024-10-06 11:23:58.996259] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:01.649 [2024-10-06 11:23:58.996270] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:01.649 [2024-10-06 11:23:58.996277] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.649 [2024-10-06 
11:23:58.996281] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23f0ad0) 00:29:01.649 [2024-10-06 11:23:58.996288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.649 [2024-10-06 11:23:58.996300] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446900, cid 4, qid 0 00:29:01.649 [2024-10-06 11:23:58.996395] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:01.649 [2024-10-06 11:23:58.996402] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:01.649 [2024-10-06 11:23:58.996405] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:01.649 [2024-10-06 11:23:58.996408] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23f0ad0): datao=0, datal=4096, cccid=4 00:29:01.649 [2024-10-06 11:23:58.996412] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2446900) on tqpair(0x23f0ad0): expected_datao=0, payload_size=4096 00:29:01.649 [2024-10-06 11:23:58.996416] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.649 [2024-10-06 11:23:58.996422] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:01.649 [2024-10-06 11:23:58.996426] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:01.649 [2024-10-06 11:23:58.996447] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.649 [2024-10-06 11:23:58.996453] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.649 [2024-10-06 11:23:58.996456] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.649 [2024-10-06 11:23:58.996460] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446900) on tqpair=0x23f0ad0 00:29:01.649 [2024-10-06 11:23:58.996467] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:01.649 [2024-10-06 11:23:58.996474] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:01.649 [2024-10-06 11:23:58.996482] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:01.649 [2024-10-06 11:23:58.996487] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:01.649 [2024-10-06 11:23:58.996492] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:01.650 [2024-10-06 11:23:58.996497] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:01.650 [2024-10-06 11:23:58.996501] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:01.650 [2024-10-06 11:23:58.996508] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:01.650 [2024-10-06 11:23:58.996512] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:01.650 [2024-10-06 11:23:58.996525] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.996528] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23f0ad0) 00:29:01.650 [2024-10-06 11:23:58.996535] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.650 [2024-10-06 11:23:58.996541] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.996544] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.996547] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23f0ad0) 00:29:01.650 [2024-10-06 11:23:58.996553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.650 [2024-10-06 11:23:58.996565] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446900, cid 4, qid 0 00:29:01.650 [2024-10-06 11:23:58.996569] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446a80, cid 5, qid 0 00:29:01.650 [2024-10-06 11:23:58.996660] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.650 [2024-10-06 11:23:58.996666] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.650 [2024-10-06 11:23:58.996669] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.996672] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446900) on tqpair=0x23f0ad0 00:29:01.650 [2024-10-06 11:23:58.996678] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.650 [2024-10-06 11:23:58.996683] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.650 [2024-10-06 11:23:58.996686] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.996690] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446a80) on tqpair=0x23f0ad0 00:29:01.650 [2024-10-06 11:23:58.996699] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.996703] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23f0ad0) 00:29:01.650 [2024-10-06 11:23:58.996708] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.650 [2024-10-06 11:23:58.996717] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446a80, cid 5, qid 0 00:29:01.650 [2024-10-06 11:23:58.996794] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.650 [2024-10-06 11:23:58.996800] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.650 [2024-10-06 11:23:58.996803] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.996806] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446a80) on tqpair=0x23f0ad0 00:29:01.650 [2024-10-06 11:23:58.996814] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.996818] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23f0ad0) 00:29:01.650 [2024-10-06 11:23:58.996823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:01.650 [2024-10-06 11:23:58.996832] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446a80, cid 5, qid 0 00:29:01.650 [2024-10-06 11:23:58.996908] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.650 [2024-10-06 11:23:58.996914] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.650 [2024-10-06 11:23:58.996917] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.996921] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446a80) on tqpair=0x23f0ad0 00:29:01.650 [2024-10-06 11:23:58.996930] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.996934] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23f0ad0) 00:29:01.650 [2024-10-06 11:23:58.996940] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.650 [2024-10-06 11:23:58.996949] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446a80, cid 5, qid 0 00:29:01.650 [2024-10-06 11:23:58.997019] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.650 [2024-10-06 11:23:58.997024] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.650 [2024-10-06 11:23:58.997027] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997031] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446a80) on tqpair=0x23f0ad0 00:29:01.650 [2024-10-06 11:23:58.997043] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997048] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23f0ad0) 00:29:01.650 [2024-10-06 11:23:58.997053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.650 [2024-10-06 11:23:58.997068] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997072] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23f0ad0) 00:29:01.650 [2024-10-06 11:23:58.997078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.650 [2024-10-06 11:23:58.997084] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997088] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x23f0ad0) 00:29:01.650 [2024-10-06 11:23:58.997093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.650 [2024-10-06 11:23:58.997101] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997104] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x23f0ad0) 00:29:01.650 [2024-10-06 11:23:58.997110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.650 [2024-10-06 11:23:58.997120] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x2446a80, cid 5, qid 0 00:29:01.650 [2024-10-06 11:23:58.997125] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446900, cid 4, qid 0 00:29:01.650 [2024-10-06 11:23:58.997129] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446c00, cid 6, qid 0 00:29:01.650 [2024-10-06 11:23:58.997134] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446d80, cid 7, qid 0 00:29:01.650 [2024-10-06 11:23:58.997284] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:01.650 [2024-10-06 11:23:58.997290] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:01.650 [2024-10-06 11:23:58.997294] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997297] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23f0ad0): datao=0, datal=8192, cccid=5 00:29:01.650 [2024-10-06 11:23:58.997301] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2446a80) on tqpair(0x23f0ad0): expected_datao=0, payload_size=8192 00:29:01.650 [2024-10-06 11:23:58.997305] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997351] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997355] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997364] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:01.650 [2024-10-06 11:23:58.997371] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:01.650 [2024-10-06 11:23:58.997374] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997377] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23f0ad0): datao=0, datal=512, cccid=4 00:29:01.650 [2024-10-06 11:23:58.997381] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2446900) on tqpair(0x23f0ad0): expected_datao=0, payload_size=512 00:29:01.650 [2024-10-06 11:23:58.997386] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997391] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997395] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997400] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:01.650 [2024-10-06 11:23:58.997405] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:01.650 [2024-10-06 11:23:58.997408] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997411] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23f0ad0): datao=0, datal=512, cccid=6 00:29:01.650 [2024-10-06 11:23:58.997415] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2446c00) on tqpair(0x23f0ad0): expected_datao=0, payload_size=512 00:29:01.650 [2024-10-06 11:23:58.997419] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997424] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997427] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997432] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:01.650 [2024-10-06 11:23:58.997437] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:29:01.650 [2024-10-06 11:23:58.997440] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997444] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23f0ad0): datao=0, datal=4096, cccid=7 00:29:01.650 [2024-10-06 11:23:58.997447] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2446d80) on tqpair(0x23f0ad0): expected_datao=0, payload_size=4096 00:29:01.650 [2024-10-06 11:23:58.997451] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997457] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997460] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997468] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.650 [2024-10-06 11:23:58.997473] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.650 [2024-10-06 11:23:58.997476] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997480] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446a80) on tqpair=0x23f0ad0 00:29:01.650 [2024-10-06 11:23:58.997489] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.650 [2024-10-06 11:23:58.997495] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.650 [2024-10-06 11:23:58.997498] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997502] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446900) on tqpair=0x23f0ad0 00:29:01.650 [2024-10-06 11:23:58.997510] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.650 [2024-10-06 11:23:58.997516] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.650 [2024-10-06 11:23:58.997519] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997523] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446c00) on tqpair=0x23f0ad0 00:29:01.650 [2024-10-06 11:23:58.997528] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.650 [2024-10-06 11:23:58.997533] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.650 [2024-10-06 11:23:58.997536] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.650 [2024-10-06 11:23:58.997540] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446d80) on tqpair=0x23f0ad0 00:29:01.650 ===================================================== 00:29:01.650 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:01.650 ===================================================== 00:29:01.650 Controller Capabilities/Features 00:29:01.650 ================================ 00:29:01.650 Vendor ID: 8086 00:29:01.650 Subsystem Vendor ID: 8086 00:29:01.650 Serial Number: SPDK00000000000001 00:29:01.650 Model Number: SPDK bdev Controller 00:29:01.650 Firmware Version: 25.01 00:29:01.650 Recommended Arb Burst: 6 00:29:01.650 IEEE OUI Identifier: e4 d2 5c 00:29:01.650 Multi-path I/O 00:29:01.651 May have multiple subsystem ports: Yes 00:29:01.651 May have multiple controllers: Yes 00:29:01.651 Associated with SR-IOV VF: No 00:29:01.651 Max Data Transfer Size: 131072 00:29:01.651 Max Number of Namespaces: 32 00:29:01.651 Max Number of I/O Queues: 127 00:29:01.651 NVMe Specification Version (VS): 1.3 00:29:01.651 NVMe 
Specification Version (Identify): 1.3 00:29:01.651 Maximum Queue Entries: 128 00:29:01.651 Contiguous Queues Required: Yes 00:29:01.651 Arbitration Mechanisms Supported 00:29:01.651 Weighted Round Robin: Not Supported 00:29:01.651 Vendor Specific: Not Supported 00:29:01.651 Reset Timeout: 15000 ms 00:29:01.651 Doorbell Stride: 4 bytes 00:29:01.651 NVM Subsystem Reset: Not Supported 00:29:01.651 Command Sets Supported 00:29:01.651 NVM Command Set: Supported 00:29:01.651 Boot Partition: Not Supported 00:29:01.651 Memory Page Size Minimum: 4096 bytes 00:29:01.651 Memory Page Size Maximum: 4096 bytes 00:29:01.651 Persistent Memory Region: Not Supported 00:29:01.651 Optional Asynchronous Events Supported 00:29:01.651 Namespace Attribute Notices: Supported 00:29:01.651 Firmware Activation Notices: Not Supported 00:29:01.651 ANA Change Notices: Not Supported 00:29:01.651 PLE Aggregate Log Change Notices: Not Supported 00:29:01.651 LBA Status Info Alert Notices: Not Supported 00:29:01.651 EGE Aggregate Log Change Notices: Not Supported 00:29:01.651 Normal NVM Subsystem Shutdown event: Not Supported 00:29:01.651 Zone Descriptor Change Notices: Not Supported 00:29:01.651 Discovery Log Change Notices: Not Supported 00:29:01.651 Controller Attributes 00:29:01.651 128-bit Host Identifier: Supported 00:29:01.651 Non-Operational Permissive Mode: Not Supported 00:29:01.651 NVM Sets: Not Supported 00:29:01.651 Read Recovery Levels: Not Supported 00:29:01.651 Endurance Groups: Not Supported 00:29:01.651 Predictable Latency Mode: Not Supported 00:29:01.651 Traffic Based Keep ALive: Not Supported 00:29:01.651 Namespace Granularity: Not Supported 00:29:01.651 SQ Associations: Not Supported 00:29:01.651 UUID List: Not Supported 00:29:01.651 Multi-Domain Subsystem: Not Supported 00:29:01.651 Fixed Capacity Management: Not Supported 00:29:01.651 Variable Capacity Management: Not Supported 00:29:01.651 Delete Endurance Group: Not Supported 00:29:01.651 Delete NVM Set: Not Supported 00:29:01.651 Extended LBA Formats Supported: Not Supported 00:29:01.651 Flexible Data Placement Supported: Not Supported 00:29:01.651 00:29:01.651 Controller Memory Buffer Support 00:29:01.651 ================================ 00:29:01.651 Supported: No 00:29:01.651 00:29:01.651 Persistent Memory Region Support 00:29:01.651 ================================ 00:29:01.651 Supported: No 00:29:01.651 00:29:01.651 Admin Command Set Attributes 00:29:01.651 ============================ 00:29:01.651 Security Send/Receive: Not Supported 00:29:01.651 Format NVM: Not Supported 00:29:01.651 Firmware Activate/Download: Not Supported 00:29:01.651 Namespace Management: Not Supported 00:29:01.651 Device Self-Test: Not Supported 00:29:01.651 Directives: Not Supported 00:29:01.651 NVMe-MI: Not Supported 00:29:01.651 Virtualization Management: Not Supported 00:29:01.651 Doorbell Buffer Config: Not Supported 00:29:01.651 Get LBA Status Capability: Not Supported 00:29:01.651 Command & Feature Lockdown Capability: Not Supported 00:29:01.651 Abort Command Limit: 4 00:29:01.651 Async Event Request Limit: 4 00:29:01.651 Number of Firmware Slots: N/A 00:29:01.651 Firmware Slot 1 Read-Only: N/A 00:29:01.651 Firmware Activation Without Reset: N/A 00:29:01.651 Multiple Update Detection Support: N/A 00:29:01.651 Firmware Update Granularity: No Information Provided 00:29:01.651 Per-Namespace SMART Log: No 00:29:01.651 Asymmetric Namespace Access Log Page: Not Supported 00:29:01.651 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:01.651 Command Effects Log Page: 
Supported 00:29:01.651 Get Log Page Extended Data: Supported 00:29:01.651 Telemetry Log Pages: Not Supported 00:29:01.651 Persistent Event Log Pages: Not Supported 00:29:01.651 Supported Log Pages Log Page: May Support 00:29:01.651 Commands Supported & Effects Log Page: Not Supported 00:29:01.651 Feature Identifiers & Effects Log Page:May Support 00:29:01.651 NVMe-MI Commands & Effects Log Page: May Support 00:29:01.651 Data Area 4 for Telemetry Log: Not Supported 00:29:01.651 Error Log Page Entries Supported: 128 00:29:01.651 Keep Alive: Supported 00:29:01.651 Keep Alive Granularity: 10000 ms 00:29:01.651 00:29:01.651 NVM Command Set Attributes 00:29:01.651 ========================== 00:29:01.651 Submission Queue Entry Size 00:29:01.651 Max: 64 00:29:01.651 Min: 64 00:29:01.651 Completion Queue Entry Size 00:29:01.651 Max: 16 00:29:01.651 Min: 16 00:29:01.651 Number of Namespaces: 32 00:29:01.651 Compare Command: Supported 00:29:01.651 Write Uncorrectable Command: Not Supported 00:29:01.651 Dataset Management Command: Supported 00:29:01.651 Write Zeroes Command: Supported 00:29:01.651 Set Features Save Field: Not Supported 00:29:01.651 Reservations: Supported 00:29:01.651 Timestamp: Not Supported 00:29:01.651 Copy: Supported 00:29:01.651 Volatile Write Cache: Present 00:29:01.651 Atomic Write Unit (Normal): 1 00:29:01.651 Atomic Write Unit (PFail): 1 00:29:01.651 Atomic Compare & Write Unit: 1 00:29:01.651 Fused Compare & Write: Supported 00:29:01.651 Scatter-Gather List 00:29:01.651 SGL Command Set: Supported 00:29:01.651 SGL Keyed: Supported 00:29:01.651 SGL Bit Bucket Descriptor: Not Supported 00:29:01.651 SGL Metadata Pointer: Not Supported 00:29:01.651 Oversized SGL: Not Supported 00:29:01.651 SGL Metadata Address: Not Supported 00:29:01.651 SGL Offset: Supported 00:29:01.651 Transport SGL Data Block: Not Supported 00:29:01.651 Replay Protected Memory Block: Not Supported 00:29:01.651 00:29:01.651 Firmware Slot Information 00:29:01.651 ========================= 00:29:01.651 Active slot: 1 00:29:01.651 Slot 1 Firmware Revision: 25.01 00:29:01.651 00:29:01.651 00:29:01.651 Commands Supported and Effects 00:29:01.651 ============================== 00:29:01.651 Admin Commands 00:29:01.651 -------------- 00:29:01.651 Get Log Page (02h): Supported 00:29:01.651 Identify (06h): Supported 00:29:01.651 Abort (08h): Supported 00:29:01.651 Set Features (09h): Supported 00:29:01.651 Get Features (0Ah): Supported 00:29:01.651 Asynchronous Event Request (0Ch): Supported 00:29:01.651 Keep Alive (18h): Supported 00:29:01.651 I/O Commands 00:29:01.651 ------------ 00:29:01.651 Flush (00h): Supported LBA-Change 00:29:01.651 Write (01h): Supported LBA-Change 00:29:01.651 Read (02h): Supported 00:29:01.651 Compare (05h): Supported 00:29:01.651 Write Zeroes (08h): Supported LBA-Change 00:29:01.651 Dataset Management (09h): Supported LBA-Change 00:29:01.651 Copy (19h): Supported LBA-Change 00:29:01.651 00:29:01.651 Error Log 00:29:01.651 ========= 00:29:01.651 00:29:01.651 Arbitration 00:29:01.651 =========== 00:29:01.651 Arbitration Burst: 1 00:29:01.651 00:29:01.651 Power Management 00:29:01.651 ================ 00:29:01.651 Number of Power States: 1 00:29:01.651 Current Power State: Power State #0 00:29:01.651 Power State #0: 00:29:01.651 Max Power: 0.00 W 00:29:01.651 Non-Operational State: Operational 00:29:01.651 Entry Latency: Not Reported 00:29:01.651 Exit Latency: Not Reported 00:29:01.651 Relative Read Throughput: 0 00:29:01.651 Relative Read Latency: 0 00:29:01.651 Relative Write Throughput: 0 
00:29:01.651 Relative Write Latency: 0 00:29:01.651 Idle Power: Not Reported 00:29:01.651 Active Power: Not Reported 00:29:01.651 Non-Operational Permissive Mode: Not Supported 00:29:01.651 00:29:01.651 Health Information 00:29:01.651 ================== 00:29:01.651 Critical Warnings: 00:29:01.651 Available Spare Space: OK 00:29:01.651 Temperature: OK 00:29:01.651 Device Reliability: OK 00:29:01.651 Read Only: No 00:29:01.651 Volatile Memory Backup: OK 00:29:01.651 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:01.651 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:01.651 Available Spare: 0% 00:29:01.651 Available Spare Threshold: 0% 00:29:01.651 Life Percentage Used:[2024-10-06 11:23:58.997621] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.651 [2024-10-06 11:23:58.997626] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x23f0ad0) 00:29:01.651 [2024-10-06 11:23:58.997632] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.651 [2024-10-06 11:23:58.997644] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446d80, cid 7, qid 0 00:29:01.651 [2024-10-06 11:23:59.001067] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.651 [2024-10-06 11:23:59.001076] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.651 [2024-10-06 11:23:59.001079] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.651 [2024-10-06 11:23:59.001083] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446d80) on tqpair=0x23f0ad0 00:29:01.651 [2024-10-06 11:23:59.001113] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:01.651 [2024-10-06 11:23:59.001122] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446300) on tqpair=0x23f0ad0 00:29:01.651 [2024-10-06 11:23:59.001128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.651 [2024-10-06 11:23:59.001133] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446480) on tqpair=0x23f0ad0 00:29:01.651 [2024-10-06 11:23:59.001137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.651 [2024-10-06 11:23:59.001141] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446600) on tqpair=0x23f0ad0 00:29:01.651 [2024-10-06 11:23:59.001145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.651 [2024-10-06 11:23:59.001150] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.651 [2024-10-06 11:23:59.001154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.651 [2024-10-06 11:23:59.001162] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.651 [2024-10-06 11:23:59.001165] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001168] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.652 [2024-10-06 11:23:59.001174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.652 [2024-10-06 11:23:59.001187] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.652 [2024-10-06 11:23:59.001329] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.652 [2024-10-06 11:23:59.001335] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.652 [2024-10-06 11:23:59.001338] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001342] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.652 [2024-10-06 11:23:59.001347] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001351] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001354] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.652 [2024-10-06 11:23:59.001360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.652 [2024-10-06 11:23:59.001373] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.652 [2024-10-06 11:23:59.001466] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.652 [2024-10-06 11:23:59.001472] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.652 [2024-10-06 11:23:59.001475] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001481] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.652 [2024-10-06 11:23:59.001485] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:01.652 [2024-10-06 11:23:59.001489] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:01.652 [2024-10-06 11:23:59.001497] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001501] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001504] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.652 [2024-10-06 11:23:59.001509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.652 [2024-10-06 11:23:59.001519] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.652 [2024-10-06 11:23:59.001591] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.652 [2024-10-06 11:23:59.001597] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.652 [2024-10-06 11:23:59.001600] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001604] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.652 [2024-10-06 11:23:59.001612] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001616] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001619] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.652 [2024-10-06 
11:23:59.001625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.652 [2024-10-06 11:23:59.001634] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.652 [2024-10-06 11:23:59.001703] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.652 [2024-10-06 11:23:59.001709] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.652 [2024-10-06 11:23:59.001712] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001716] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.652 [2024-10-06 11:23:59.001723] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001727] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001731] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.652 [2024-10-06 11:23:59.001736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.652 [2024-10-06 11:23:59.001745] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.652 [2024-10-06 11:23:59.001820] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.652 [2024-10-06 11:23:59.001826] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.652 [2024-10-06 11:23:59.001830] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001833] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.652 [2024-10-06 11:23:59.001841] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001844] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001848] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.652 [2024-10-06 11:23:59.001853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.652 [2024-10-06 11:23:59.001862] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.652 [2024-10-06 11:23:59.001932] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.652 [2024-10-06 11:23:59.001939] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.652 [2024-10-06 11:23:59.001943] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001946] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.652 [2024-10-06 11:23:59.001955] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001958] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.001961] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.652 [2024-10-06 11:23:59.001967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.652 [2024-10-06 11:23:59.001976] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.652 [2024-10-06 11:23:59.002046] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.652 [2024-10-06 11:23:59.002052] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.652 [2024-10-06 11:23:59.002055] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002066] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.652 [2024-10-06 11:23:59.002074] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002078] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002081] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.652 [2024-10-06 11:23:59.002086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.652 [2024-10-06 11:23:59.002096] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.652 [2024-10-06 11:23:59.002174] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.652 [2024-10-06 11:23:59.002180] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.652 [2024-10-06 11:23:59.002183] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002186] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.652 [2024-10-06 11:23:59.002194] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002198] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002201] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.652 [2024-10-06 11:23:59.002207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.652 [2024-10-06 11:23:59.002216] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.652 [2024-10-06 11:23:59.002290] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.652 [2024-10-06 11:23:59.002296] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.652 [2024-10-06 11:23:59.002300] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002303] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.652 [2024-10-06 11:23:59.002311] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002314] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002318] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.652 [2024-10-06 11:23:59.002323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.652 [2024-10-06 11:23:59.002333] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.652 [2024-10-06 11:23:59.002400] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.652 [2024-10-06 
11:23:59.002407] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.652 [2024-10-06 11:23:59.002412] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002416] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.652 [2024-10-06 11:23:59.002423] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002427] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002431] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.652 [2024-10-06 11:23:59.002436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.652 [2024-10-06 11:23:59.002445] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.652 [2024-10-06 11:23:59.002518] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.652 [2024-10-06 11:23:59.002524] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.652 [2024-10-06 11:23:59.002527] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002530] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.652 [2024-10-06 11:23:59.002538] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002542] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002545] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.652 [2024-10-06 11:23:59.002551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.652 [2024-10-06 11:23:59.002560] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.652 [2024-10-06 11:23:59.002631] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.652 [2024-10-06 11:23:59.002637] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.652 [2024-10-06 11:23:59.002640] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002644] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.652 [2024-10-06 11:23:59.002652] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002655] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002659] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.652 [2024-10-06 11:23:59.002664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.652 [2024-10-06 11:23:59.002673] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.652 [2024-10-06 11:23:59.002748] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.652 [2024-10-06 11:23:59.002754] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.652 [2024-10-06 11:23:59.002757] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.652 
[2024-10-06 11:23:59.002761] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.652 [2024-10-06 11:23:59.002768] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002772] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.652 [2024-10-06 11:23:59.002775] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.653 [2024-10-06 11:23:59.002781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.653 [2024-10-06 11:23:59.002790] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.653 [2024-10-06 11:23:59.002859] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.653 [2024-10-06 11:23:59.002866] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.653 [2024-10-06 11:23:59.002869] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.002874] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.653 [2024-10-06 11:23:59.002883] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.002886] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.002889] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.653 [2024-10-06 11:23:59.002895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.653 [2024-10-06 11:23:59.002904] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.653 [2024-10-06 11:23:59.002977] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.653 [2024-10-06 11:23:59.002982] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.653 [2024-10-06 11:23:59.002986] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.002989] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.653 [2024-10-06 11:23:59.002997] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003001] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003004] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.653 [2024-10-06 11:23:59.003010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.653 [2024-10-06 11:23:59.003019] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.653 [2024-10-06 11:23:59.003091] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.653 [2024-10-06 11:23:59.003097] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.653 [2024-10-06 11:23:59.003101] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003104] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.653 [2024-10-06 11:23:59.003112] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003116] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003119] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.653 [2024-10-06 11:23:59.003125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.653 [2024-10-06 11:23:59.003134] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.653 [2024-10-06 11:23:59.003209] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.653 [2024-10-06 11:23:59.003215] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.653 [2024-10-06 11:23:59.003218] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003221] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.653 [2024-10-06 11:23:59.003229] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003233] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003236] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.653 [2024-10-06 11:23:59.003242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.653 [2024-10-06 11:23:59.003251] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.653 [2024-10-06 11:23:59.003325] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.653 [2024-10-06 11:23:59.003331] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.653 [2024-10-06 11:23:59.003334] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003337] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.653 [2024-10-06 11:23:59.003347] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003351] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003354] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.653 [2024-10-06 11:23:59.003360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.653 [2024-10-06 11:23:59.003369] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.653 [2024-10-06 11:23:59.003444] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.653 [2024-10-06 11:23:59.003450] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.653 [2024-10-06 11:23:59.003453] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003457] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.653 [2024-10-06 11:23:59.003464] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003468] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003471] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.653 [2024-10-06 11:23:59.003477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.653 [2024-10-06 11:23:59.003486] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.653 [2024-10-06 11:23:59.003566] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.653 [2024-10-06 11:23:59.003571] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.653 [2024-10-06 11:23:59.003574] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003578] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.653 [2024-10-06 11:23:59.003586] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003590] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003593] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.653 [2024-10-06 11:23:59.003599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.653 [2024-10-06 11:23:59.003608] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.653 [2024-10-06 11:23:59.003680] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.653 [2024-10-06 11:23:59.003685] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.653 [2024-10-06 11:23:59.003689] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003692] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.653 [2024-10-06 11:23:59.003700] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003704] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.003707] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.653 [2024-10-06 11:23:59.003712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.653 [2024-10-06 11:23:59.003721] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.653 [2024-10-06 11:23:59.007067] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.653 [2024-10-06 11:23:59.007076] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.653 [2024-10-06 11:23:59.007079] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.007083] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.653 [2024-10-06 11:23:59.007093] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.007100] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.007103] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23f0ad0) 00:29:01.653 [2024-10-06 11:23:59.007109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.653 [2024-10-06 11:23:59.007121] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2446780, cid 3, qid 0 00:29:01.653 [2024-10-06 11:23:59.007272] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:01.653 [2024-10-06 11:23:59.007278] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:01.653 [2024-10-06 11:23:59.007281] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:01.653 [2024-10-06 11:23:59.007285] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2446780) on tqpair=0x23f0ad0 00:29:01.653 [2024-10-06 11:23:59.007291] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:29:01.653 0% 00:29:01.653 Data Units Read: 0 00:29:01.653 Data Units Written: 0 00:29:01.653 Host Read Commands: 0 00:29:01.653 Host Write Commands: 0 00:29:01.653 Controller Busy Time: 0 minutes 00:29:01.653 Power Cycles: 0 00:29:01.653 Power On Hours: 0 hours 00:29:01.653 Unsafe Shutdowns: 0 00:29:01.653 Unrecoverable Media Errors: 0 00:29:01.653 Lifetime Error Log Entries: 0 00:29:01.653 Warning Temperature Time: 0 minutes 00:29:01.653 Critical Temperature Time: 0 minutes 00:29:01.653 00:29:01.653 Number of Queues 00:29:01.653 ================ 00:29:01.653 Number of I/O Submission Queues: 127 00:29:01.653 Number of I/O Completion Queues: 127 00:29:01.653 00:29:01.653 Active Namespaces 00:29:01.653 ================= 00:29:01.653 Namespace ID:1 00:29:01.653 Error Recovery Timeout: Unlimited 00:29:01.653 Command Set Identifier: NVM (00h) 00:29:01.653 Deallocate: Supported 00:29:01.653 Deallocated/Unwritten Error: Not Supported 00:29:01.653 Deallocated Read Value: Unknown 00:29:01.653 Deallocate in Write Zeroes: Not Supported 00:29:01.654 Deallocated Guard Field: 0xFFFF 00:29:01.654 Flush: Supported 00:29:01.654 Reservation: Supported 00:29:01.654 Namespace Sharing Capabilities: Multiple Controllers 00:29:01.654 Size (in LBAs): 131072 (0GiB) 00:29:01.654 Capacity (in LBAs): 131072 (0GiB) 00:29:01.654 Utilization (in LBAs): 131072 (0GiB) 00:29:01.654 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:01.654 EUI64: ABCDEF0123456789 00:29:01.654 UUID: 9e1d88f7-b7bc-4688-9a96-1c1c850f56ec 00:29:01.654 Thin Provisioning: Not Supported 00:29:01.654 Per-NS Atomic Units: Yes 00:29:01.654 Atomic Boundary Size (Normal): 0 00:29:01.654 Atomic Boundary Size (PFail): 0 00:29:01.654 Atomic Boundary Offset: 0 00:29:01.654 Maximum Single Source Range Length: 65535 00:29:01.654 Maximum Copy Length: 65535 00:29:01.654 Maximum Source Range Count: 1 00:29:01.654 NGUID/EUI64 Never Reused: No 00:29:01.654 Namespace Write Protected: No 00:29:01.654 Number of LBA Formats: 1 00:29:01.654 Current LBA Format: LBA Format #00 00:29:01.654 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:01.654 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT 
SIGTERM EXIT 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:01.654 rmmod nvme_tcp 00:29:01.654 rmmod nvme_fabrics 00:29:01.654 rmmod nvme_keyring 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 2180854 ']' 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 2180854 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2180854 ']' 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2180854 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2180854 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2180854' 00:29:01.654 killing process with pid 2180854 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2180854 00:29:01.654 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2180854 00:29:02.019 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:02.019 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:02.019 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:02.019 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:02.019 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:29:02.019 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:02.019 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:29:02.019 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:02.019 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:02.019 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.019 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.019 11:23:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:04.074 00:29:04.074 real 0m8.801s 00:29:04.074 user 0m5.377s 00:29:04.074 sys 0m4.485s 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:04.074 ************************************ 00:29:04.074 END TEST nvmf_identify 00:29:04.074 ************************************ 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.074 ************************************ 00:29:04.074 START TEST nvmf_perf 00:29:04.074 ************************************ 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:04.074 * Looking for test storage... 00:29:04.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:04.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.074 --rc genhtml_branch_coverage=1 00:29:04.074 --rc genhtml_function_coverage=1 00:29:04.074 --rc genhtml_legend=1 00:29:04.074 --rc geninfo_all_blocks=1 00:29:04.074 --rc geninfo_unexecuted_blocks=1 00:29:04.074 00:29:04.074 ' 00:29:04.074 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:04.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.075 --rc genhtml_branch_coverage=1 00:29:04.075 --rc genhtml_function_coverage=1 00:29:04.075 --rc genhtml_legend=1 00:29:04.075 --rc geninfo_all_blocks=1 00:29:04.075 --rc geninfo_unexecuted_blocks=1 00:29:04.075 00:29:04.075 ' 00:29:04.075 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:04.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.075 --rc genhtml_branch_coverage=1 00:29:04.075 --rc genhtml_function_coverage=1 00:29:04.075 --rc genhtml_legend=1 00:29:04.075 --rc geninfo_all_blocks=1 00:29:04.075 --rc geninfo_unexecuted_blocks=1 00:29:04.075 00:29:04.075 ' 00:29:04.075 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:04.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.075 --rc genhtml_branch_coverage=1 00:29:04.075 --rc genhtml_function_coverage=1 00:29:04.075 --rc genhtml_legend=1 00:29:04.075 --rc geninfo_all_blocks=1 00:29:04.075 --rc geninfo_unexecuted_blocks=1 00:29:04.075 00:29:04.075 ' 00:29:04.075 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:04.075 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:04.075 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:04.075 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.075 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:04.075 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.075 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:04.075 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:04.075 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:04.075 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:04.075 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.075 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:04.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.334 11:24:01 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:04.334 11:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:29:09.611 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:09.612 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:09.612 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:09.612 Found net devices under 0000:af:00.0: cvl_0_0 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:09.612 11:24:06 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:09.612 Found net devices under 0000:af:00.1: cvl_0_1 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:09.612 11:24:06 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:09.612 11:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:09.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:09.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:29:09.612 00:29:09.612 --- 10.0.0.2 ping statistics --- 00:29:09.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.612 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:09.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:09.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:29:09.612 00:29:09.612 --- 10.0.0.1 ping statistics --- 00:29:09.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.612 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=2184581 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 2184581 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2184581 ']' 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:29:09.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:09.612 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:09.612 [2024-10-06 11:24:07.108481] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:29:09.612 [2024-10-06 11:24:07.108532] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.612 [2024-10-06 11:24:07.167475] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:09.872 [2024-10-06 11:24:07.209328] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.872 [2024-10-06 11:24:07.209370] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.872 [2024-10-06 11:24:07.209382] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.872 [2024-10-06 11:24:07.209388] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.872 [2024-10-06 11:24:07.209393] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:09.872 [2024-10-06 11:24:07.210804] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.872 [2024-10-06 11:24:07.210906] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:29:09.872 [2024-10-06 11:24:07.211015] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:29:09.872 [2024-10-06 11:24:07.211016] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.872 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:09.872 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:29:09.872 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:09.872 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:09.872 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:09.872 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.872 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:09.872 11:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:13.161 11:24:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:13.161 11:24:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:13.161 11:24:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:29:13.161 11:24:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:13.421 11:24:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
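The trace above resolves the locally attached NVMe controller (traddr 0000:5e:00.0) from the generated bdev configuration and creates a 64 MiB malloc bdev; the entries that follow create the TCP transport, the nqn.2016-06.io.spdk:cnode1 subsystem, its two namespaces and the 10.0.0.2:4420 data and discovery listeners. A minimal standalone sketch of that same rpc.py sequence, reconstructed from the traced commands and not taken from perf.sh itself (the $spdk/$rpc shorthand is introduced here for readability, rpc.py is assumed to talk to the default /var/tmp/spdk.sock of the nvmf_tgt started earlier in this trace, and the "Nvme0" bdev name, 64 MiB/512-byte malloc sizing, NQN, serial and listener address are all copied from the trace):

# Sketch only -- reconstructed from the traced host/perf.sh rpc.py calls, not the script itself.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$spdk/scripts/rpc.py

# PCIe address of the local NVMe controller, pulled from the bdev config
# (resolves to 0000:5e:00.0 in this run).
local_nvme_trid=$($rpc framework_get_config bdev | jq -r '.[].params | select(.name=="Nvme0").traddr')

# A RAM-backed malloc bdev is always exported as a namespace; the local NVMe bdev
# is added alongside it when a controller was found.
bdevs=$($rpc bdev_malloc_create 64 512)
[ -n "$local_nvme_trid" ] && bdevs="$bdevs Nvme0n1"

# TCP transport, subsystem, namespaces and listeners, as traced below.
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
for bdev in $bdevs; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With the target exposed, the spdk_nvme_perf invocations traced below exercise it either locally (-r 'trtype:PCIe traddr:0000:5e:00.0') or over the fabric (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420') at the queue depths and block sizes shown in their respective result tables.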
00:29:13.421 11:24:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:29:13.421 11:24:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:13.421 11:24:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:13.421 11:24:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:13.421 [2024-10-06 11:24:10.975249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.679 11:24:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:13.679 11:24:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:13.679 11:24:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:13.939 11:24:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:13.939 11:24:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:14.198 11:24:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:14.457 [2024-10-06 11:24:11.795557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.457 11:24:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:14.457 11:24:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:29:14.457 11:24:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:14.457 11:24:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:14.457 11:24:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:15.838 Initializing NVMe Controllers 00:29:15.838 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:29:15.838 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:29:15.838 Initialization complete. Launching workers. 
00:29:15.838 ======================================================== 00:29:15.838 Latency(us) 00:29:15.838 Device Information : IOPS MiB/s Average min max 00:29:15.838 PCIE (0000:5e:00.0) NSID 1 from core 0: 99812.40 389.89 320.30 14.90 4654.88 00:29:15.838 ======================================================== 00:29:15.838 Total : 99812.40 389.89 320.30 14.90 4654.88 00:29:15.838 00:29:15.838 11:24:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:17.217 Initializing NVMe Controllers 00:29:17.217 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:17.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:17.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:17.217 Initialization complete. Launching workers. 00:29:17.217 ======================================================== 00:29:17.217 Latency(us) 00:29:17.217 Device Information : IOPS MiB/s Average min max 00:29:17.217 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 109.00 0.43 9208.88 146.96 45307.06 00:29:17.217 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 65.00 0.25 15933.54 6909.50 47949.87 00:29:17.217 ======================================================== 00:29:17.217 Total : 174.00 0.68 11720.97 146.96 47949.87 00:29:17.217 00:29:17.217 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.596 Initializing NVMe Controllers 00:29:18.596 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:18.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:18.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:18.596 Initialization complete. Launching workers. 00:29:18.596 ======================================================== 00:29:18.596 Latency(us) 00:29:18.596 Device Information : IOPS MiB/s Average min max 00:29:18.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10628.71 41.52 3012.29 437.05 6381.13 00:29:18.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3866.90 15.11 8310.49 5777.85 16185.21 00:29:18.596 ======================================================== 00:29:18.596 Total : 14495.61 56.62 4425.65 437.05 16185.21 00:29:18.596 00:29:18.596 11:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:18.596 11:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:18.596 11:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:21.132 Initializing NVMe Controllers 00:29:21.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:21.132 Controller IO queue size 128, less than required. 00:29:21.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:21.132 Controller IO queue size 128, less than required. 00:29:21.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:21.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:21.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:21.132 Initialization complete. Launching workers. 00:29:21.132 ======================================================== 00:29:21.132 Latency(us) 00:29:21.132 Device Information : IOPS MiB/s Average min max 00:29:21.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1193.66 298.41 109789.59 54648.22 165358.67 00:29:21.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 575.59 143.90 230164.22 93375.75 326445.12 00:29:21.132 ======================================================== 00:29:21.132 Total : 1769.25 442.31 148951.29 54648.22 326445.12 00:29:21.132 00:29:21.132 11:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:21.132 No valid NVMe controllers or AIO or URING devices found 00:29:21.132 Initializing NVMe Controllers 00:29:21.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:21.132 Controller IO queue size 128, less than required. 00:29:21.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:21.132 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:21.132 Controller IO queue size 128, less than required. 00:29:21.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:21.132 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:29:21.132 WARNING: Some requested NVMe devices were skipped 00:29:21.132 11:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:23.673 Initializing NVMe Controllers 00:29:23.673 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:23.673 Controller IO queue size 128, less than required. 00:29:23.673 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:23.673 Controller IO queue size 128, less than required. 00:29:23.674 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:23.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:23.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:23.674 Initialization complete. Launching workers. 
00:29:23.674 00:29:23.674 ==================== 00:29:23.674 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:23.674 TCP transport: 00:29:23.674 polls: 28267 00:29:23.674 idle_polls: 11028 00:29:23.674 sock_completions: 17239 00:29:23.674 nvme_completions: 4805 00:29:23.674 submitted_requests: 7164 00:29:23.674 queued_requests: 1 00:29:23.674 00:29:23.674 ==================== 00:29:23.674 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:23.674 TCP transport: 00:29:23.674 polls: 25885 00:29:23.674 idle_polls: 9507 00:29:23.674 sock_completions: 16378 00:29:23.674 nvme_completions: 4945 00:29:23.674 submitted_requests: 7462 00:29:23.674 queued_requests: 1 00:29:23.674 ======================================================== 00:29:23.674 Latency(us) 00:29:23.674 Device Information : IOPS MiB/s Average min max 00:29:23.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1199.10 299.77 111451.50 63219.65 185074.68 00:29:23.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1234.04 308.51 104765.12 63112.13 152514.09 00:29:23.674 ======================================================== 00:29:23.674 Total : 2433.14 608.29 108060.29 63112.13 185074.68 00:29:23.674 00:29:23.674 11:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:23.674 11:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:23.932 11:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:23.932 11:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:29:23.932 11:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:27.221 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=60663e25-b042-4103-ad45-5fed1bf1b8f5 00:29:27.221 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 60663e25-b042-4103-ad45-5fed1bf1b8f5 00:29:27.221 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=60663e25-b042-4103-ad45-5fed1bf1b8f5 00:29:27.221 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:27.221 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:27.221 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:27.221 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:27.221 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:27.221 { 00:29:27.221 "uuid": "60663e25-b042-4103-ad45-5fed1bf1b8f5", 00:29:27.221 "name": "lvs_0", 00:29:27.221 "base_bdev": "Nvme0n1", 00:29:27.221 "total_data_clusters": 238234, 00:29:27.221 "free_clusters": 238234, 00:29:27.221 "block_size": 512, 00:29:27.221 "cluster_size": 4194304 00:29:27.221 } 00:29:27.221 ]' 00:29:27.221 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="60663e25-b042-4103-ad45-5fed1bf1b8f5") .free_clusters' 00:29:27.221 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:29:27.221 11:24:24 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="60663e25-b042-4103-ad45-5fed1bf1b8f5") .cluster_size' 00:29:27.221 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:27.221 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:29:27.221 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:29:27.221 952936 00:29:27.221 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:27.221 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:27.221 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 60663e25-b042-4103-ad45-5fed1bf1b8f5 lbd_0 20480 00:29:27.789 11:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=e68c3945-4d6d-4e7a-8da9-826a3940be20 00:29:27.789 11:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore e68c3945-4d6d-4e7a-8da9-826a3940be20 lvs_n_0 00:29:28.727 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=d3bffba4-75d5-4e01-8171-1d8ff26e3e6c 00:29:28.727 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb d3bffba4-75d5-4e01-8171-1d8ff26e3e6c 00:29:28.727 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=d3bffba4-75d5-4e01-8171-1d8ff26e3e6c 00:29:28.727 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:28.727 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:28.727 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:28.727 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:28.727 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:28.727 { 00:29:28.727 "uuid": "60663e25-b042-4103-ad45-5fed1bf1b8f5", 00:29:28.727 "name": "lvs_0", 00:29:28.727 "base_bdev": "Nvme0n1", 00:29:28.727 "total_data_clusters": 238234, 00:29:28.727 "free_clusters": 233114, 00:29:28.727 "block_size": 512, 00:29:28.727 "cluster_size": 4194304 00:29:28.727 }, 00:29:28.727 { 00:29:28.727 "uuid": "d3bffba4-75d5-4e01-8171-1d8ff26e3e6c", 00:29:28.727 "name": "lvs_n_0", 00:29:28.727 "base_bdev": "e68c3945-4d6d-4e7a-8da9-826a3940be20", 00:29:28.727 "total_data_clusters": 5114, 00:29:28.727 "free_clusters": 5114, 00:29:28.727 "block_size": 512, 00:29:28.727 "cluster_size": 4194304 00:29:28.727 } 00:29:28.727 ]' 00:29:28.727 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d3bffba4-75d5-4e01-8171-1d8ff26e3e6c") .free_clusters' 00:29:28.727 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:29:28.727 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="d3bffba4-75d5-4e01-8171-1d8ff26e3e6c") .cluster_size' 00:29:28.727 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:28.727 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:29:28.727 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1374 -- # echo 20456 00:29:28.727 20456 00:29:28.727 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:28.727 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d3bffba4-75d5-4e01-8171-1d8ff26e3e6c lbd_nest_0 20456 00:29:28.986 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=02923907-eed1-435f-9c5a-7b29ddc32d49 00:29:28.987 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:29.244 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:29.245 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 02923907-eed1-435f-9c5a-7b29ddc32d49 00:29:29.503 11:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:29.762 11:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:29.762 11:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:29.762 11:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:29.762 11:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:29.762 11:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:41.964 Initializing NVMe Controllers 00:29:41.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:41.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:41.964 Initialization complete. Launching workers. 00:29:41.964 ======================================================== 00:29:41.964 Latency(us) 00:29:41.964 Device Information : IOPS MiB/s Average min max 00:29:41.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.50 0.02 21976.19 165.43 47882.93 00:29:41.964 ======================================================== 00:29:41.964 Total : 45.50 0.02 21976.19 165.43 47882.93 00:29:41.964 00:29:41.964 11:24:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:41.964 11:24:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:51.942 Initializing NVMe Controllers 00:29:51.942 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:51.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:51.942 Initialization complete. Launching workers. 
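As an aside on the free_mb values echoed above: get_lvs_free_mb simply multiplies the free_clusters reported by bdev_lvol_get_lvstores by cluster_size and converts to MiB, and perf.sh then caps the result at 20480 MiB before carving out the lvol bdevs. A quick re-check with the numbers printed in this run (a sketch, not part of the trace):

  cs=4194304                               # cluster_size reported for both lvstores
  echo $(( 238234 * cs / 1024 / 1024 ))    # 952936 MiB free in lvs_0, capped to 20480 for lbd_0
  echo $(( 5114 * cs / 1024 / 1024 ))      # 20456 MiB free in lvs_n_0, used as-is for lbd_nest_0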
00:29:51.942 ======================================================== 00:29:51.942 Latency(us) 00:29:51.942 Device Information : IOPS MiB/s Average min max 00:29:51.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.60 9.70 12886.61 7137.26 51870.26 00:29:51.942 ======================================================== 00:29:51.942 Total : 77.60 9.70 12886.61 7137.26 51870.26 00:29:51.942 00:29:51.942 11:24:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:51.942 11:24:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:51.942 11:24:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:01.921 Initializing NVMe Controllers 00:30:01.922 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:01.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:01.922 Initialization complete. Launching workers. 00:30:01.922 ======================================================== 00:30:01.922 Latency(us) 00:30:01.922 Device Information : IOPS MiB/s Average min max 00:30:01.922 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8759.20 4.28 3653.19 249.36 10034.98 00:30:01.922 ======================================================== 00:30:01.922 Total : 8759.20 4.28 3653.19 249.36 10034.98 00:30:01.922 00:30:01.922 11:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:01.922 11:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:12.055 Initializing NVMe Controllers 00:30:12.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:12.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:12.055 Initialization complete. Launching workers. 00:30:12.055 ======================================================== 00:30:12.055 Latency(us) 00:30:12.055 Device Information : IOPS MiB/s Average min max 00:30:12.055 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2334.56 291.82 13707.22 767.73 31417.94 00:30:12.055 ======================================================== 00:30:12.055 Total : 2334.56 291.82 13707.22 767.73 31417.94 00:30:12.055 00:30:12.055 11:25:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:12.055 11:25:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:12.055 11:25:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:22.030 Initializing NVMe Controllers 00:30:22.030 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:22.030 Controller IO queue size 128, less than required. 00:30:22.030 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:22.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:22.030 Initialization complete. Launching workers. 00:30:22.030 ======================================================== 00:30:22.030 Latency(us) 00:30:22.030 Device Information : IOPS MiB/s Average min max 00:30:22.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15876.50 7.75 8067.34 1341.43 48670.96 00:30:22.030 ======================================================== 00:30:22.030 Total : 15876.50 7.75 8067.34 1341.43 48670.96 00:30:22.030 00:30:22.030 11:25:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:22.030 11:25:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:32.011 Initializing NVMe Controllers 00:30:32.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:32.011 Controller IO queue size 128, less than required. 00:30:32.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:32.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:32.011 Initialization complete. Launching workers. 00:30:32.011 ======================================================== 00:30:32.011 Latency(us) 00:30:32.011 Device Information : IOPS MiB/s Average min max 00:30:32.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1196.00 149.50 107423.94 15079.00 214776.82 00:30:32.011 ======================================================== 00:30:32.011 Total : 1196.00 149.50 107423.94 15079.00 214776.82 00:30:32.011 00:30:32.011 11:25:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:32.011 11:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 02923907-eed1-435f-9c5a-7b29ddc32d49 00:30:32.270 11:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:32.529 11:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e68c3945-4d6d-4e7a-8da9-826a3940be20 00:30:32.788 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:33.047 rmmod nvme_tcp 
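Stepping back from the teardown output: the six latency blocks above come from perf.sh sweeping the qd_depth and io_size arrays traced earlier (queue depths 1/32/128 against IO sizes 512 and 131072 bytes). A minimal re-creation of that loop, assuming the same spdk_nvme_perf binary and the 10.0.0.2:4420 TCP listener used throughout this run:

  # mirror the host/perf.sh sweep: 3 queue depths x 2 IO sizes, 10 s of 50/50 random read/write each
  for qd in 1 32 128; do
    for o in 512 131072; do
      ./build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
  done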
00:30:33.047 rmmod nvme_fabrics 00:30:33.047 rmmod nvme_keyring 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 2184581 ']' 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 2184581 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2184581 ']' 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2184581 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2184581 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2184581' 00:30:33.047 killing process with pid 2184581 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 2184581 00:30:33.047 11:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2184581 00:30:34.954 11:25:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:34.954 11:25:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:34.954 11:25:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:34.954 11:25:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:30:34.954 11:25:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:30:34.954 11:25:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:30:34.954 11:25:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:34.954 11:25:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:34.954 11:25:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:34.954 11:25:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.954 11:25:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:34.954 11:25:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.861 11:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:36.861 00:30:36.861 real 1m32.658s 00:30:36.861 user 5m33.068s 00:30:36.861 sys 0m15.663s 00:30:36.861 11:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:36.861 11:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:36.861 ************************************ 00:30:36.861 END TEST nvmf_perf 00:30:36.861 ************************************ 00:30:36.861 11:25:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:36.861 11:25:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:36.861 11:25:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:36.861 11:25:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.861 ************************************ 00:30:36.861 START TEST nvmf_fio_host 00:30:36.861 ************************************ 00:30:36.861 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:36.861 * Looking for test storage... 00:30:36.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:36.861 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:36.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.862 --rc genhtml_branch_coverage=1 00:30:36.862 --rc genhtml_function_coverage=1 00:30:36.862 --rc genhtml_legend=1 00:30:36.862 --rc geninfo_all_blocks=1 00:30:36.862 --rc geninfo_unexecuted_blocks=1 00:30:36.862 00:30:36.862 ' 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:36.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.862 --rc genhtml_branch_coverage=1 00:30:36.862 --rc genhtml_function_coverage=1 00:30:36.862 --rc genhtml_legend=1 00:30:36.862 --rc geninfo_all_blocks=1 00:30:36.862 --rc geninfo_unexecuted_blocks=1 00:30:36.862 00:30:36.862 ' 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:36.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.862 --rc genhtml_branch_coverage=1 00:30:36.862 --rc genhtml_function_coverage=1 00:30:36.862 --rc genhtml_legend=1 00:30:36.862 --rc geninfo_all_blocks=1 00:30:36.862 --rc geninfo_unexecuted_blocks=1 00:30:36.862 00:30:36.862 ' 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:36.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.862 --rc genhtml_branch_coverage=1 00:30:36.862 --rc genhtml_function_coverage=1 00:30:36.862 --rc genhtml_legend=1 00:30:36.862 --rc geninfo_all_blocks=1 00:30:36.862 --rc geninfo_unexecuted_blocks=1 00:30:36.862 00:30:36.862 ' 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.862 11:25:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.862 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:36.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:36.863 
11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:30:36.863 11:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:42.137 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:42.137 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:42.137 Found net devices under 0000:af:00.0: cvl_0_0 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.137 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:42.138 Found net devices under 0000:af:00.1: cvl_0_1 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:42.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:42.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:30:42.138 00:30:42.138 --- 10.0.0.2 ping statistics --- 00:30:42.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.138 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:42.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:30:42.138 00:30:42.138 --- 10.0.0.1 ping statistics --- 00:30:42.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.138 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2201724 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2201724 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2201724 ']' 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:42.138 [2024-10-06 11:25:39.491893] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
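As an aside on the fixture being traced here: nvmf/common.sh runs target and initiator on the same machine by moving one port of the e810 pair (cvl_0_0) into a private network namespace, so the 10.0.0.1 to 10.0.0.2 traffic actually crosses the NIC pair, and nvmf_tgt is then launched inside that namespace. A condensed sketch using only the commands and addresses visible in this trace:

  # target-side port goes into its own namespace; the initiator side stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # as traced above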
00:30:42.138 [2024-10-06 11:25:39.491934] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.138 [2024-10-06 11:25:39.548643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:42.138 [2024-10-06 11:25:39.588357] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.138 [2024-10-06 11:25:39.588392] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.138 [2024-10-06 11:25:39.588399] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:42.138 [2024-10-06 11:25:39.588405] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:42.138 [2024-10-06 11:25:39.588414] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:42.138 [2024-10-06 11:25:39.589897] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.138 [2024-10-06 11:25:39.589994] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:42.138 [2024-10-06 11:25:39.590012] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:30:42.138 [2024-10-06 11:25:39.590012] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:30:42.138 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:42.398 [2024-10-06 11:25:39.868925] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:42.398 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:42.398 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:42.398 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.398 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:42.657 Malloc1 00:30:42.657 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:42.916 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:43.174 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.174 [2024-10-06 11:25:40.734937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.433 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:43.433 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:43.433 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:43.434 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:43.434 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:43.434 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:43.434 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:43.434 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:43.434 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:43.434 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:43.434 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.434 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:43.434 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:43.434 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:43.434 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:43.434 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:43.434 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.434 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:43.434 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:43.434 11:25:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:43.692 11:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:43.692 11:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:43.692 11:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:43.692 11:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:43.951 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:43.951 fio-3.35 00:30:43.951 Starting 1 thread 00:30:46.485 00:30:46.485 test: (groupid=0, jobs=1): 
err= 0: pid=2202098: Sun Oct 6 11:25:43 2024 00:30:46.485 read: IOPS=11.7k, BW=45.9MiB/s (48.1MB/s)(92.0MiB/2005msec) 00:30:46.485 slat (nsec): min=1531, max=241333, avg=1728.21, stdev=2298.80 00:30:46.485 clat (usec): min=3157, max=10336, avg=6038.80, stdev=447.31 00:30:46.485 lat (usec): min=3194, max=10337, avg=6040.53, stdev=447.24 00:30:46.485 clat percentiles (usec): 00:30:46.485 | 1.00th=[ 5014], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:30:46.485 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6128], 00:30:46.485 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6718], 00:30:46.485 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 8586], 99.95th=[ 9110], 00:30:46.485 | 99.99th=[10290] 00:30:46.485 bw ( KiB/s): min=45800, max=47528, per=99.97%, avg=46964.00, stdev=811.58, samples=4 00:30:46.485 iops : min=11450, max=11882, avg=11741.00, stdev=202.90, samples=4 00:30:46.485 write: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(91.4MiB/2005msec); 0 zone resets 00:30:46.485 slat (nsec): min=1565, max=237713, avg=1787.89, stdev=1720.00 00:30:46.485 clat (usec): min=2463, max=9679, avg=4846.05, stdev=378.30 00:30:46.485 lat (usec): min=2478, max=9681, avg=4847.84, stdev=378.29 00:30:46.485 clat percentiles (usec): 00:30:46.485 | 1.00th=[ 3982], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:30:46.485 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 00:30:46.485 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:30:46.485 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 7963], 99.95th=[ 8717], 00:30:46.485 | 99.99th=[ 9241] 00:30:46.485 bw ( KiB/s): min=46272, max=47264, per=99.99%, avg=46674.00, stdev=485.58, samples=4 00:30:46.485 iops : min=11568, max=11816, avg=11668.50, stdev=121.40, samples=4 00:30:46.485 lat (msec) : 4=0.59%, 10=99.40%, 20=0.01% 00:30:46.485 cpu : usr=67.76%, sys=28.64%, ctx=101, majf=0, minf=4 00:30:46.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:46.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:46.485 issued rwts: total=23548,23398,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:46.485 00:30:46.485 Run status group 0 (all jobs): 00:30:46.485 READ: bw=45.9MiB/s (48.1MB/s), 45.9MiB/s-45.9MiB/s (48.1MB/s-48.1MB/s), io=92.0MiB (96.5MB), run=2005-2005msec 00:30:46.485 WRITE: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=91.4MiB (95.8MB), run=2005-2005msec 00:30:46.485 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:46.485 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:46.485 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:46.485 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:46.485 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
local sanitizers 00:30:46.485 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:46.485 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:46.485 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:46.485 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.485 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:46.485 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:46.485 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:46.485 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:46.485 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:46.485 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.485 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:46.486 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:46.486 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:46.486 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:46.486 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:46.486 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:46.486 11:25:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:46.486 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:46.486 fio-3.35 00:30:46.486 Starting 1 thread 00:30:49.024 00:30:49.024 test: (groupid=0, jobs=1): err= 0: pid=2202659: Sun Oct 6 11:25:46 2024 00:30:49.024 read: IOPS=10.8k, BW=169MiB/s (177MB/s)(338MiB/2004msec) 00:30:49.024 slat (nsec): min=2513, max=91437, avg=2801.60, stdev=1313.94 00:30:49.024 clat (usec): min=1633, max=13384, avg=7062.96, stdev=1668.69 00:30:49.024 lat (usec): min=1636, max=13386, avg=7065.76, stdev=1668.84 00:30:49.024 clat percentiles (usec): 00:30:49.024 | 1.00th=[ 3687], 5.00th=[ 4490], 10.00th=[ 4883], 20.00th=[ 5604], 00:30:49.024 | 30.00th=[ 6128], 40.00th=[ 6587], 50.00th=[ 7046], 60.00th=[ 7439], 00:30:49.024 | 70.00th=[ 7898], 80.00th=[ 8455], 90.00th=[ 9110], 95.00th=[ 9896], 00:30:49.024 | 99.00th=[11469], 99.50th=[11863], 99.90th=[12387], 99.95th=[13042], 00:30:49.024 | 99.99th=[13304] 00:30:49.024 bw ( KiB/s): min=78016, max=94880, per=49.76%, avg=85992.00, stdev=6935.63, samples=4 00:30:49.024 iops : min= 4876, max= 5930, avg=5374.50, stdev=433.48, samples=4 00:30:49.024 write: IOPS=6285, BW=98.2MiB/s (103MB/s)(175MiB/1785msec); 0 zone resets 00:30:49.024 
slat (usec): min=29, max=380, avg=31.46, stdev= 7.68 00:30:49.024 clat (usec): min=1677, max=14571, avg=8566.07, stdev=1526.78 00:30:49.024 lat (usec): min=1709, max=14681, avg=8597.53, stdev=1528.37 00:30:49.024 clat percentiles (usec): 00:30:49.024 | 1.00th=[ 5800], 5.00th=[ 6521], 10.00th=[ 6849], 20.00th=[ 7308], 00:30:49.024 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8717], 00:30:49.024 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10814], 95.00th=[11469], 00:30:49.024 | 99.00th=[12649], 99.50th=[13042], 99.90th=[14222], 99.95th=[14353], 00:30:49.024 | 99.99th=[14615] 00:30:49.024 bw ( KiB/s): min=83168, max=98656, per=89.04%, avg=89552.00, stdev=6547.71, samples=4 00:30:49.024 iops : min= 5198, max= 6166, avg=5597.00, stdev=409.23, samples=4 00:30:49.024 lat (msec) : 2=0.02%, 4=1.35%, 10=89.72%, 20=8.91% 00:30:49.024 cpu : usr=85.62%, sys=13.03%, ctx=29, majf=0, minf=4 00:30:49.024 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:30:49.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:49.024 issued rwts: total=21645,11220,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:49.024 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:49.024 00:30:49.024 Run status group 0 (all jobs): 00:30:49.024 READ: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=338MiB (355MB), run=2004-2004msec 00:30:49.024 WRITE: bw=98.2MiB/s (103MB/s), 98.2MiB/s-98.2MiB/s (103MB/s-103MB/s), io=175MiB (184MB), run=1785-1785msec 00:30:49.024 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:49.024 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:49.024 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:49.024 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:49.024 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:30:49.024 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:30:49.024 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:49.024 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:49.024 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:30:49.024 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:30:49.024 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:30:49.024 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:30:52.313 Nvme0n1 00:30:52.313 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:54.846 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=37b68cb0-f813-461c-a042-6e4ed3e2b814 00:30:54.846 
11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 37b68cb0-f813-461c-a042-6e4ed3e2b814 00:30:54.846 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=37b68cb0-f813-461c-a042-6e4ed3e2b814 00:30:54.846 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:54.846 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:54.846 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:54.846 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:55.106 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:55.106 { 00:30:55.106 "uuid": "37b68cb0-f813-461c-a042-6e4ed3e2b814", 00:30:55.106 "name": "lvs_0", 00:30:55.106 "base_bdev": "Nvme0n1", 00:30:55.106 "total_data_clusters": 930, 00:30:55.106 "free_clusters": 930, 00:30:55.106 "block_size": 512, 00:30:55.106 "cluster_size": 1073741824 00:30:55.106 } 00:30:55.106 ]' 00:30:55.106 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="37b68cb0-f813-461c-a042-6e4ed3e2b814") .free_clusters' 00:30:55.106 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:30:55.106 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="37b68cb0-f813-461c-a042-6e4ed3e2b814") .cluster_size' 00:30:55.106 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:30:55.106 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:30:55.106 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:30:55.106 952320 00:30:55.106 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:55.673 2b19f6ee-1e35-4151-b911-457cfbeedf8f 00:30:55.673 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:55.673 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:55.931 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:56.189 11:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:56.447 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:56.447 fio-3.35 00:30:56.447 Starting 1 thread 00:30:58.981 00:30:58.981 test: (groupid=0, jobs=1): err= 0: pid=2204360: Sun Oct 6 11:25:56 2024 00:30:58.981 read: IOPS=7995, BW=31.2MiB/s (32.7MB/s)(62.6MiB/2006msec) 00:30:58.981 slat (nsec): min=1526, max=100508, avg=1673.23, stdev=1089.36 00:30:58.981 clat (usec): min=860, max=170367, avg=8849.18, stdev=10326.19 00:30:58.981 lat (usec): min=861, max=170386, avg=8850.85, stdev=10326.34 00:30:58.981 clat percentiles (msec): 00:30:58.981 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:30:58.981 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:58.981 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 00:30:58.981 | 99.00th=[ 10], 99.50th=[ 14], 99.90th=[ 171], 99.95th=[ 171], 00:30:58.981 | 99.99th=[ 171] 00:30:58.981 bw ( KiB/s): min=22880, max=35120, per=99.85%, 
avg=31932.00, stdev=6036.23, samples=4 00:30:58.981 iops : min= 5720, max= 8780, avg=7983.00, stdev=1509.06, samples=4 00:30:58.981 write: IOPS=7970, BW=31.1MiB/s (32.6MB/s)(62.5MiB/2006msec); 0 zone resets 00:30:58.981 slat (nsec): min=1564, max=76611, avg=1742.44, stdev=713.05 00:30:58.981 clat (usec): min=176, max=168600, avg=7103.82, stdev=9640.00 00:30:58.981 lat (usec): min=177, max=168604, avg=7105.56, stdev=9640.15 00:30:58.981 clat percentiles (msec): 00:30:58.981 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:30:58.981 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:30:58.981 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:30:58.981 | 99.00th=[ 8], 99.50th=[ 9], 99.90th=[ 169], 99.95th=[ 169], 00:30:58.981 | 99.99th=[ 169] 00:30:58.981 bw ( KiB/s): min=23776, max=34752, per=99.95%, avg=31866.00, stdev=5395.52, samples=4 00:30:58.981 iops : min= 5944, max= 8688, avg=7966.50, stdev=1348.88, samples=4 00:30:58.981 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:30:58.981 lat (msec) : 2=0.05%, 4=0.23%, 10=99.10%, 20=0.19%, 250=0.40% 00:30:58.981 cpu : usr=67.23%, sys=30.52%, ctx=114, majf=0, minf=4 00:30:58.981 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:58.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:58.981 issued rwts: total=16038,15988,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.981 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:58.981 00:30:58.981 Run status group 0 (all jobs): 00:30:58.981 READ: bw=31.2MiB/s (32.7MB/s), 31.2MiB/s-31.2MiB/s (32.7MB/s-32.7MB/s), io=62.6MiB (65.7MB), run=2006-2006msec 00:30:58.981 WRITE: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=62.5MiB (65.5MB), run=2006-2006msec 00:30:58.981 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:58.981 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:00.359 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=95a7c580-aa36-43e6-a65a-d6cb45cb76f5 00:31:00.359 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 95a7c580-aa36-43e6-a65a-d6cb45cb76f5 00:31:00.359 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=95a7c580-aa36-43e6-a65a-d6cb45cb76f5 00:31:00.359 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:00.359 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:00.359 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:00.359 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:00.359 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:00.359 { 00:31:00.359 "uuid": "37b68cb0-f813-461c-a042-6e4ed3e2b814", 00:31:00.359 "name": "lvs_0", 00:31:00.359 "base_bdev": "Nvme0n1", 00:31:00.359 "total_data_clusters": 930, 00:31:00.359 "free_clusters": 0, 00:31:00.359 "block_size": 512, 
00:31:00.359 "cluster_size": 1073741824 00:31:00.359 }, 00:31:00.359 { 00:31:00.359 "uuid": "95a7c580-aa36-43e6-a65a-d6cb45cb76f5", 00:31:00.359 "name": "lvs_n_0", 00:31:00.359 "base_bdev": "2b19f6ee-1e35-4151-b911-457cfbeedf8f", 00:31:00.359 "total_data_clusters": 237847, 00:31:00.359 "free_clusters": 237847, 00:31:00.359 "block_size": 512, 00:31:00.359 "cluster_size": 4194304 00:31:00.359 } 00:31:00.359 ]' 00:31:00.359 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="95a7c580-aa36-43e6-a65a-d6cb45cb76f5") .free_clusters' 00:31:00.359 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:31:00.359 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="95a7c580-aa36-43e6-a65a-d6cb45cb76f5") .cluster_size' 00:31:00.359 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:00.359 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:31:00.359 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:31:00.359 951388 00:31:00.359 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:00.925 0543fa55-2940-413e-a2d2-7bb384111254 00:31:00.925 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:01.183 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:01.442 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in 
"${sanitizers[@]}" 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:01.701 11:25:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:01.960 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:01.960 fio-3.35 00:31:01.960 Starting 1 thread 00:31:04.493 00:31:04.493 test: (groupid=0, jobs=1): err= 0: pid=2205373: Sun Oct 6 11:26:01 2024 00:31:04.493 read: IOPS=7752, BW=30.3MiB/s (31.8MB/s)(60.8MiB/2006msec) 00:31:04.493 slat (nsec): min=1535, max=113653, avg=1687.80, stdev=1157.40 00:31:04.493 clat (usec): min=3106, max=16133, avg=9140.89, stdev=786.70 00:31:04.493 lat (usec): min=3110, max=16134, avg=9142.58, stdev=786.65 00:31:04.493 clat percentiles (usec): 00:31:04.493 | 1.00th=[ 7373], 5.00th=[ 7898], 10.00th=[ 8160], 20.00th=[ 8455], 00:31:04.493 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9372], 00:31:04.494 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10290], 00:31:04.494 | 99.00th=[10945], 99.50th=[11207], 99.90th=[13304], 99.95th=[14091], 00:31:04.494 | 99.99th=[15270] 00:31:04.494 bw ( KiB/s): min=29752, max=31592, per=99.78%, avg=30944.00, stdev=843.94, samples=4 00:31:04.494 iops : min= 7438, max= 7898, avg=7736.00, stdev=210.98, samples=4 00:31:04.494 write: IOPS=7737, BW=30.2MiB/s (31.7MB/s)(60.6MiB/2006msec); 0 zone resets 00:31:04.494 slat (nsec): min=1592, max=83011, avg=1757.23, stdev=729.84 00:31:04.494 clat (usec): min=1461, max=13107, avg=7314.39, stdev=660.47 00:31:04.494 lat (usec): min=1465, max=13109, avg=7316.15, stdev=660.44 00:31:04.494 clat percentiles (usec): 00:31:04.494 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6783], 00:31:04.494 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7504], 00:31:04.494 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8291], 00:31:04.494 | 99.00th=[ 
8848], 99.50th=[ 8979], 99.90th=[11338], 99.95th=[12387], 00:31:04.494 | 99.99th=[12518] 00:31:04.494 bw ( KiB/s): min=30864, max=31104, per=100.00%, avg=30948.00, stdev=106.43, samples=4 00:31:04.494 iops : min= 7716, max= 7776, avg=7737.00, stdev=26.61, samples=4 00:31:04.494 lat (msec) : 2=0.01%, 4=0.09%, 10=94.07%, 20=5.83% 00:31:04.494 cpu : usr=68.68%, sys=28.93%, ctx=49, majf=0, minf=4 00:31:04.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:04.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:04.494 issued rwts: total=15552,15521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.494 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:04.494 00:31:04.494 Run status group 0 (all jobs): 00:31:04.494 READ: bw=30.3MiB/s (31.8MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=60.8MiB (63.7MB), run=2006-2006msec 00:31:04.494 WRITE: bw=30.2MiB/s (31.7MB/s), 30.2MiB/s-30.2MiB/s (31.7MB/s-31.7MB/s), io=60.6MiB (63.6MB), run=2006-2006msec 00:31:04.494 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:04.494 11:26:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:04.752 11:26:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:08.945 11:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:08.945 11:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:11.478 11:26:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:11.736 11:26:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:13.641 rmmod nvme_tcp 00:31:13.641 rmmod nvme_fabrics 00:31:13.641 rmmod nvme_keyring 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 
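For reference, the clean-up the log has just recorded reduces to the following RPC calls, issued in roughly the reverse order of creation (same path shortening as in the earlier sketch; the rpc.py timeout flag used on the lvol delete is omitted):

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
    scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0
    scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
    scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0
    scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
    scripts/rpc.py bdev_nvme_detach_controller Nvme0

After that the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded and the target process is killed, which is what the killing-process messages below correspond to.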
00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 2201724 ']' 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 2201724 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2201724 ']' 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2201724 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2201724 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2201724' 00:31:13.641 killing process with pid 2201724 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2201724 00:31:13.641 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2201724 00:31:13.641 11:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:13.641 11:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:13.641 11:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:13.641 11:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:13.641 11:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:13.641 11:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:31:13.641 11:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:31:13.641 11:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:13.641 11:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:13.641 11:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.641 11:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.641 11:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:16.180 00:31:16.180 real 0m39.045s 00:31:16.180 user 2m39.593s 00:31:16.180 sys 0m8.420s 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.180 ************************************ 00:31:16.180 END TEST nvmf_fio_host 00:31:16.180 ************************************ 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.180 ************************************ 00:31:16.180 START TEST nvmf_failover 00:31:16.180 ************************************ 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:16.180 * Looking for test storage... 00:31:16.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:16.180 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:16.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.180 --rc genhtml_branch_coverage=1 00:31:16.181 --rc genhtml_function_coverage=1 00:31:16.181 --rc genhtml_legend=1 00:31:16.181 --rc geninfo_all_blocks=1 00:31:16.181 --rc geninfo_unexecuted_blocks=1 00:31:16.181 00:31:16.181 ' 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:16.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.181 --rc genhtml_branch_coverage=1 00:31:16.181 --rc genhtml_function_coverage=1 00:31:16.181 --rc genhtml_legend=1 00:31:16.181 --rc geninfo_all_blocks=1 00:31:16.181 --rc geninfo_unexecuted_blocks=1 00:31:16.181 00:31:16.181 ' 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:16.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.181 --rc genhtml_branch_coverage=1 00:31:16.181 --rc genhtml_function_coverage=1 00:31:16.181 --rc genhtml_legend=1 00:31:16.181 --rc geninfo_all_blocks=1 00:31:16.181 --rc geninfo_unexecuted_blocks=1 00:31:16.181 00:31:16.181 ' 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:16.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.181 --rc genhtml_branch_coverage=1 00:31:16.181 --rc genhtml_function_coverage=1 00:31:16.181 --rc genhtml_legend=1 00:31:16.181 --rc geninfo_all_blocks=1 00:31:16.181 --rc geninfo_unexecuted_blocks=1 00:31:16.181 00:31:16.181 ' 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:16.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:16.181 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:21.459 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:21.459 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:21.459 Found net devices under 0000:af:00.0: cvl_0_0 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:21.459 Found net devices under 0000:af:00.1: cvl_0_1 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:21.459 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:21.459 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:21.459 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:21.459 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:21.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:21.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:31:21.719 00:31:21.719 --- 10.0.0.2 ping statistics --- 00:31:21.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.719 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:21.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:21.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:31:21.719 00:31:21.719 --- 10.0.0.1 ping statistics --- 00:31:21.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.719 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=2210612 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 2210612 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2210612 ']' 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:21.719 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:21.719 [2024-10-06 11:26:19.142090] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:31:21.719 [2024-10-06 11:26:19.142131] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.719 [2024-10-06 11:26:19.198813] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:21.719 [2024-10-06 11:26:19.237422] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:21.719 [2024-10-06 11:26:19.237463] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:21.719 [2024-10-06 11:26:19.237469] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:21.719 [2024-10-06 11:26:19.237476] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:21.719 [2024-10-06 11:26:19.237481] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:21.719 [2024-10-06 11:26:19.238471] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:31:21.719 [2024-10-06 11:26:19.238557] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:31:21.719 [2024-10-06 11:26:19.238558] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.978 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:21.978 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:21.978 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:21.978 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:21.978 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:21.978 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.978 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:21.978 [2024-10-06 11:26:19.532530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:22.238 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:22.238 Malloc0 00:31:22.238 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:22.497 11:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:22.756 11:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.015 [2024-10-06 11:26:20.366832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.015 11:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:23.015 [2024-10-06 11:26:20.559341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:23.015 11:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:23.274 [2024-10-06 11:26:20.751932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:31:23.274 11:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:23.274 11:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2210857 00:31:23.274 11:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:23.274 11:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2210857 /var/tmp/bdevperf.sock 00:31:23.274 11:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2210857 ']' 00:31:23.274 11:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:23.274 11:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:23.274 11:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:23.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:23.274 11:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:23.274 11:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:23.533 11:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:23.533 11:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:23.533 11:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:24.180 NVMe0n1 00:31:24.180 11:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:24.462 00:31:24.462 11:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2211088 00:31:24.462 11:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:24.462 11:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:25.399 11:26:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:25.657 11:26:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:28.947 11:26:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:28.947 00:31:28.947 11:26:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:31:29.207 [2024-10-06 11:26:26.677650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b02e0 is same with the state(6) to be set
00:31:29.207 (the same message repeats with only the timestamp changing, from 11:26:26.677697 through 11:26:26.677913)
00:31:29.207 11:26:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:31:32.497 11:26:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:32.497 [2024-10-06 11:26:29.898132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:32.497 11:26:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:31:33.434 11:26:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:31:33.694 [2024-10-06 11:26:31.113677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b1610 is same with the state(6) to be set
00:31:33.694 (the same message repeats with only the timestamp changing, from 11:26:31.113714 through 11:26:31.114282)
00:31:33.695 11:26:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2211088
00:31:40.272 {
00:31:40.272   "results": [
00:31:40.272     {
00:31:40.272       "job": "NVMe0n1",
00:31:40.272       "core_mask": "0x1",
00:31:40.272       "workload": "verify",
00:31:40.272       "status": "finished",
00:31:40.272       "verify_range": {
00:31:40.272         "start": 0,
00:31:40.272         "length": 16384
00:31:40.272       },
00:31:40.272       "queue_depth": 128,
00:31:40.272       "io_size": 4096,
00:31:40.272       "runtime": 15.004359,
00:31:40.272       "iops": 11129.965631987345,
00:31:40.272       "mibps": 43.476428249950565,
00:31:40.272       "io_failed": 6613,
00:31:40.272       "io_timeout": 0,
00:31:40.272       "avg_latency_us": 11040.992121466958,
00:31:40.272       "min_latency_us": 620.2514285714286,
00:31:40.273       "max_latency_us": 21845.333333333332
00:31:40.273     }
00:31:40.273   ],
00:31:40.273   "core_count": 1
00:31:40.273 }
00:31:40.273 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2210857
00:31:40.273 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2210857 ']'
00:31:40.273 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2210857
00:31:40.273 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:31:40.273 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:40.273 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2210857
00:31:40.273 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:31:40.273 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:31:40.273 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2210857' killing process with pid 2210857
00:31:40.273 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2210857
00:31:40.273 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2210857
00:31:40.273
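The bdevperf summary above is internally consistent: at the 4096-byte I/O size used by this run, the reported IOPS and MiB/s figures agree, and the nonzero io_failed count is consistent with the "ABORTED - SQ DELETION" completions recorded in try.txt below as listeners were removed during the failover sequence. A quick arithmetic check, using the figures copied from the JSON above:

  awk 'BEGIN { printf "%.2f MiB/s\n", 11129.965631987345 * 4096 / (1024 * 1024) }'   # prints 43.48, matching "mibps"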
11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:40.273 [2024-10-06 11:26:20.813415] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:31:40.273 [2024-10-06 11:26:20.813469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2210857 ] 00:31:40.273 [2024-10-06 11:26:20.868208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.273 [2024-10-06 11:26:20.907874] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.273 Running I/O for 15 seconds... 00:31:40.273 11144.00 IOPS, 43.53 MiB/s [2024-10-06 11:26:23.016255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.273 [2024-10-06 11:26:23.016421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.273 [2024-10-06 11:26:23.016436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.273 [2024-10-06 11:26:23.016450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.273 [2024-10-06 11:26:23.016470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.273 [2024-10-06 11:26:23.016484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.273 [2024-10-06 11:26:23.016499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.273 [2024-10-06 11:26:23.016517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.273 [2024-10-06 11:26:23.016737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.273 [2024-10-06 11:26:23.016746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.274 [2024-10-06 11:26:23.016752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.274 [2024-10-06 11:26:23.016760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.274 [2024-10-06 11:26:23.016766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.274 [2024-10-06 11:26:23.016774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.274 [2024-10-06 11:26:23.016780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.274 [2024-10-06 11:26:23.016788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.274 [2024-10-06 11:26:23.016795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.274 [2024-10-06 11:26:23.016803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.274 [2024-10-06 11:26:23.016809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.274 [2024-10-06 11:26:23.016817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.274 [2024-10-06 11:26:23.016823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.274 [2024-10-06 11:26:23.016832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.274 [2024-10-06 11:26:23.016839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.274 [2024-10-06 11:26:23.016846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.274 [2024-10-06 11:26:23.016852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.274 [2024-10-06 
11:26:23.016860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:40.274 [2024-10-06 11:26:23.016867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for every remaining queued I/O on qid:1 (WRITE lba 99504-99632, SGL DATA BLOCK OFFSET 0x0 len:0x1000; READ lba 98672-99232, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each one completed as ABORTED - SQ DELETION (00/08) ...]
00:31:40.276 [2024-10-06 11:26:23.018158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b6ff0 is same with the state(6) to be set
00:31:40.276 [2024-10-06 11:26:23.018167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:40.276 [2024-10-06 11:26:23.018173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:40.276 [2024-10-06 11:26:23.018179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99240 len:8 PRP1 0x0 PRP2 0x0
00:31:40.276 [2024-10-06 11:26:23.018186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
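Every completion in the run above carries the same status pair "(00/08)". spdk_nvme_print_completion prints this as status code type / status code, which lines up with the accompanying text: type 0x0 is the generic command status set and code 0x08 is "command aborted due to SQ deletion" in the NVMe base specification. The sketch below is only an illustrative decoder for reading these lines; the helper name, table contents and sample string are assumptions drawn from the log, not part of the test suite.

import re

# Only the values needed to read this log are mapped here.
SCT = {0x0: "generic command status"}
GENERIC_SC = {0x00: "successful completion",
              0x07: "command abort requested",
              0x08: "command aborted due to SQ deletion"}

def decode_status(line: str) -> str:
    """Pull the "(SCT/SC)" pair out of an spdk_nvme_print_completion line."""
    m = re.search(r"\((\w{2})/(\w{2})\)", line)
    if not m:
        return "no status field found"
    sct, sc = int(m.group(1), 16), int(m.group(2), 16)
    sct_name = SCT.get(sct, f"status code type 0x{sct:x}")
    sc_name = GENERIC_SC.get(sc, f"status code 0x{sc:02x}") if sct == 0x0 else f"status code 0x{sc:02x}"
    return f"{sct_name}: {sc_name}"

print(decode_status("ABORTED - SQ DELETION (00/08) qid:1 cid:0"))
# generic command status: command aborted due to SQ deletion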
00:31:40.276 [2024-10-06 11:26:23.018227] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14b6ff0 was disconnected and freed. reset controller.
00:31:40.276 [2024-10-06 11:26:23.018236] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... the four queued ASYNC EVENT REQUEST admin commands (qid:0 cid:0-3) are likewise completed as ABORTED - SQ DELETION (00/08) ...]
00:31:40.276 [2024-10-06 11:26:23.018313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:40.276 [2024-10-06 11:26:23.021082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:40.276 [2024-10-06 11:26:23.021110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14987a0 (9): Bad file descriptor
00:31:40.276 [2024-10-06 11:26:23.095684] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
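The block above is the key event of this run: once the TCP qpair to 10.0.0.2:4420 is torn down, every queued I/O and admin command is failed back with SQ DELETION status, bdev_nvme fails the path over to the second listener at 10.0.0.2:4421, and controller nqn.2016-06.io.spdk:cnode1 comes back with "Resetting controller successful." A quick way to tally how often this happens across the whole console output is sketched below; the file name nvmf_failover.log is a placeholder for a saved copy of this log, and the substrings searched for are taken verbatim from the messages above.

from collections import Counter

counts = Counter()
with open("nvmf_failover.log") as log:   # placeholder path for a saved copy of this output
    for line in log:
        if "ABORTED - SQ DELETION (00/08)" in line:
            counts["aborted completions"] += 1
        if "bdev_nvme_failover_trid" in line:
            counts["failover attempts"] += 1
        if "Resetting controller successful" in line:
            counts["successful resets"] += 1

for event, n in sorted(counts.items()):
    print(f"{event}: {n}")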
00:31:40.276 10816.50 IOPS, 42.25 MiB/s 10945.67 IOPS, 42.76 MiB/s 10979.75 IOPS, 42.89 MiB/s
[2024-10-06 11:26:26.679068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:54144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.276 [2024-10-06 11:26:26.679103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for every remaining queued I/O on qid:1 (READ lba 54152-54456, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; WRITE lba 54520-54936, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each one completed as ABORTED - SQ DELETION (00/08) ...]
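The throughput samples at the top of this block (10816.50 IOPS at 42.25 MiB/s, and so on) are consistent with the I/O size shown in every command print: len:8 blocks, i.e. the 0x1000 (4096 byte) transfers in the SGL fields. The snippet below is only a quick consistency check on numbers taken from the log, nothing more.

# Each I/O in this log is 4096 bytes (len:8 blocks, SGL len:0x1000),
# so MiB/s should equal IOPS * 4096 / 2**20.
samples = [(10816.50, 42.25), (10945.67, 42.76), (10979.75, 42.89)]
for iops, reported in samples:
    computed = iops * 4096 / 2**20
    print(f"{iops:9.2f} IOPS -> {computed:5.2f} MiB/s (log reports {reported})")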
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.279 [2024-10-06 11:26:26.680424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.279 [2024-10-06 11:26:26.680430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.279 [2024-10-06 11:26:26.680438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:54936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.279 [2024-10-06 11:26:26.680444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.279 [2024-10-06 11:26:26.680467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.279 [2024-10-06 11:26:26.680474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54944 len:8 PRP1 0x0 PRP2 0x0 00:31:40.279 [2024-10-06 11:26:26.680481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.279 [2024-10-06 11:26:26.680490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.279 [2024-10-06 11:26:26.680495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.279 [2024-10-06 11:26:26.680501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54952 len:8 PRP1 0x0 PRP2 0x0 00:31:40.279 [2024-10-06 11:26:26.680508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.279 [2024-10-06 11:26:26.680516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.279 [2024-10-06 11:26:26.680521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.279 [2024-10-06 11:26:26.680526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54960 len:8 PRP1 0x0 PRP2 0x0 00:31:40.279 [2024-10-06 11:26:26.680534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.279 [2024-10-06 11:26:26.680540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.279 [2024-10-06 11:26:26.680545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.279 [2024-10-06 11:26:26.680551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54968 len:8 PRP1 0x0 PRP2 0x0 00:31:40.279 [2024-10-06 11:26:26.680557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.279 [2024-10-06 11:26:26.680563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.279 [2024-10-06 11:26:26.680568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.279 [2024-10-06 11:26:26.680573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54976 len:8 PRP1 0x0 PRP2 0x0 00:31:40.279 [2024-10-06 11:26:26.680579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:40.279 [2024-10-06 11:26:26.680586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.279 [2024-10-06 11:26:26.680591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.279 [2024-10-06 11:26:26.680597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54984 len:8 PRP1 0x0 PRP2 0x0 00:31:40.279 [2024-10-06 11:26:26.680605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.279 [2024-10-06 11:26:26.680612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.279 [2024-10-06 11:26:26.680616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.279 [2024-10-06 11:26:26.680622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54992 len:8 PRP1 0x0 PRP2 0x0 00:31:40.279 [2024-10-06 11:26:26.680628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.279 [2024-10-06 11:26:26.680634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.279 [2024-10-06 11:26:26.680639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.279 [2024-10-06 11:26:26.680644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55000 len:8 PRP1 0x0 PRP2 0x0 00:31:40.279 [2024-10-06 11:26:26.680651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.279 [2024-10-06 11:26:26.680657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.279 [2024-10-06 11:26:26.680662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.279 [2024-10-06 11:26:26.680667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55008 len:8 PRP1 0x0 PRP2 0x0 00:31:40.279 [2024-10-06 11:26:26.680673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.279 [2024-10-06 11:26:26.680680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.279 [2024-10-06 11:26:26.680684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.279 [2024-10-06 11:26:26.680689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55016 len:8 PRP1 0x0 PRP2 0x0 00:31:40.279 [2024-10-06 11:26:26.680696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.279 [2024-10-06 11:26:26.680704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.279 [2024-10-06 11:26:26.680709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.279 [2024-10-06 11:26:26.680714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55024 len:8 PRP1 0x0 PRP2 0x0 00:31:40.279 [2024-10-06 11:26:26.680720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.279 [2024-10-06 11:26:26.680726] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.279 [2024-10-06 11:26:26.680731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.279 [2024-10-06 11:26:26.680736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55032 len:8 PRP1 0x0 PRP2 0x0 00:31:40.279 [2024-10-06 11:26:26.680742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.279 [2024-10-06 11:26:26.680749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.680754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.680759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55040 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.680765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.680771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.680776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.680783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55048 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.680789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.680796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.680800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.680806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55056 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.680812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.680818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.680823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.680828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55064 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.680834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.680841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.680846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.680851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55072 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.680858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.680864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.680869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.680874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55080 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.680880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.680888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.680893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.680898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55088 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.680904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.680911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.680916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.680921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55096 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.680927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.680933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.680938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.680943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55104 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.680949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.680955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.680962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.680968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55112 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.680974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.680980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.680985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.680990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55120 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.680996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.681002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 
11:26:26.681007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.681012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55128 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.681019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.681025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.681030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.681035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55136 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.681042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.681048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.681053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.681064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55144 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.681071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.681079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.691768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.691780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55152 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.691790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.691799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.691806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.691813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55160 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.691821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.691831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.691837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.691844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54464 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.691852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.691863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.691870] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.691877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54472 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.691885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.691894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.691900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.691907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54480 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.691915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.691924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.691931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.691938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54488 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.691946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.691954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.691961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.691968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54496 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.691976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.691984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.691991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.691998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54504 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.692006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.280 [2024-10-06 11:26:26.692015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.280 [2024-10-06 11:26:26.692022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.280 [2024-10-06 11:26:26.692030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54512 len:8 PRP1 0x0 PRP2 0x0 00:31:40.280 [2024-10-06 11:26:26.692038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:26.692087] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14b9310 was disconnected and freed. reset controller. 
00:31:40.281 [2024-10-06 11:26:26.692098] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:31:40.281 [2024-10-06 11:26:26.692123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.281 [2024-10-06 11:26:26.692133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:26.692143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.281 [2024-10-06 11:26:26.692151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:26.692163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.281 [2024-10-06 11:26:26.692172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:26.692180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.281 [2024-10-06 11:26:26.692189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:26.692198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:40.281 [2024-10-06 11:26:26.692223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14987a0 (9): Bad file descriptor 00:31:40.281 [2024-10-06 11:26:26.695950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:40.281 [2024-10-06 11:26:26.730523] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
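The records above show one complete failover episode: queued I/O on the qpair for 10.0.0.2:4421 is aborted with SQ DELETION status, bdev_nvme starts a failover to 10.0.0.2:4422, the controller briefly enters the failed state (Bad file descriptor on the old TCP qpair), and the reset then completes successfully. A minimal sketch of how those events could be tallied from a saved copy of this console output follows; it is not part of the SPDK test suite, the file name console.log is illustrative, and it matches only message strings that appear verbatim in the log above.

# summarize_failover.py - hedged helper sketch, standard library only.
# Assumes the console output has been saved locally (path below is illustrative).
import re
from collections import Counter

def summarize(path: str) -> None:
    counts = Counter()
    failovers = []
    with open(path, "r", errors="replace") as fh:
        for line in fh:
            # Completions aborted because the submission queue was deleted.
            if "ABORTED - SQ DELETION" in line:
                counts["sq_deletion_aborts"] += 1
            # Queued I/O aborted while the qpair was being torn down.
            if "nvme_qpair_abort_queued_reqs" in line:
                counts["queued_io_aborts"] += 1
            # Failover notices such as "Start failover from 10.0.0.2:4421 to 10.0.0.2:4422".
            m = re.search(r"Start failover from (\S+) to (\S+)", line)
            if m:
                failovers.append((m.group(1), m.group(2)))
            if "Resetting controller successful" in line:
                counts["successful_resets"] += 1
    print("SQ-DELETION aborts :", counts["sq_deletion_aborts"])
    print("queued-I/O aborts  :", counts["queued_io_aborts"])
    print("successful resets  :", counts["successful_resets"])
    for src, dst in failovers:
        print(f"failover: {src} -> {dst}")

if __name__ == "__main__":
    summarize("console.log")  # illustrative path, not produced by this job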
00:31:40.281 10944.80 IOPS, 42.75 MiB/s 10989.67 IOPS, 42.93 MiB/s 11053.71 IOPS, 43.18 MiB/s 11087.62 IOPS, 43.31 MiB/s 11105.67 IOPS, 43.38 MiB/s [2024-10-06 11:26:31.116557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.281 [2024-10-06 11:26:31.116591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.116989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.116996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.281 [2024-10-06 11:26:31.117002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.117010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.281 [2024-10-06 11:26:31.117017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.117025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.281 [2024-10-06 11:26:31.117031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.281 [2024-10-06 11:26:31.117039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 
11:26:31.117175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:107 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:40.282 [2024-10-06 11:26:31.117491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.282 [2024-10-06 11:26:31.117518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70624 len:8 PRP1 0x0 PRP2 0x0 00:31:40.282 [2024-10-06 11:26:31.117524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.282 [2024-10-06 11:26:31.117535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.282 [2024-10-06 11:26:31.117541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.282 [2024-10-06 11:26:31.117546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70632 len:8 PRP1 0x0 PRP2 0x0 00:31:40.283 [2024-10-06 11:26:31.117553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.283 [2024-10-06 11:26:31.117559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.283 [2024-10-06 11:26:31.117564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.283 [2024-10-06 11:26:31.117569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70640 len:8 PRP1 0x0 PRP2 0x0 00:31:40.283 [2024-10-06 11:26:31.117576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.283 [2024-10-06 11:26:31.117582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.283 [2024-10-06 11:26:31.117587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.283 [2024-10-06 11:26:31.117592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70648 len:8 PRP1 0x0 PRP2 0x0 00:31:40.283 [2024-10-06 11:26:31.117599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.283 [2024-10-06 11:26:31.117606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.283 [2024-10-06 11:26:31.117610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.283 [2024-10-06 11:26:31.117615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70656 len:8 
PRP1 0x0 PRP2 0x0 00:31:40.283 [2024-10-06 11:26:31.117622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.283 [2024-10-06 11:26:31.117628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.283 [2024-10-06 11:26:31.117633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.283 [2024-10-06 11:26:31.117640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70664 len:8 PRP1 0x0 PRP2 0x0 00:31:40.283 [2024-10-06 11:26:31.117646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.283 [2024-10-06 11:26:31.117652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.283 [2024-10-06 11:26:31.117657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.283 [2024-10-06 11:26:31.117663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70672 len:8 PRP1 0x0 PRP2 0x0 00:31:40.283 [2024-10-06 11:26:31.117670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.283 [2024-10-06 11:26:31.117676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.283 [2024-10-06 11:26:31.117681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.283 [2024-10-06 11:26:31.117687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70680 len:8 PRP1 0x0 PRP2 0x0 00:31:40.283 [2024-10-06 11:26:31.117693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.283 [2024-10-06 11:26:31.117699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.283 [2024-10-06 11:26:31.117704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.283 [2024-10-06 11:26:31.117709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70688 len:8 PRP1 0x0 PRP2 0x0 00:31:40.283 [2024-10-06 11:26:31.117715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.283 [2024-10-06 11:26:31.117721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.283 [2024-10-06 11:26:31.117726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.283 [2024-10-06 11:26:31.117732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70696 len:8 PRP1 0x0 PRP2 0x0 00:31:40.283 [2024-10-06 11:26:31.117740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.283 [2024-10-06 11:26:31.117746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:40.283 [2024-10-06 11:26:31.117751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:40.283 [2024-10-06 11:26:31.117756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70704 len:8 PRP1 0x0 PRP2 0x0 00:31:40.283 [2024-10-06 
11:26:31.117762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.283 [... the same cycle of nvme_qpair_abort_queued_reqs (*ERROR*: aborting queued i/o), nvme_qpair_manual_complete_request (*NOTICE*: Command completed manually) and nvme_io_qpair_print_command/spdk_nvme_print_completion repeats for every queued WRITE sqid:1 cid:0 nsid:1 len:8 from lba:70712 through lba:71136, each one completed as ABORTED - SQ DELETION (00/08) qid:1 while the submission queue is deleted ...] 00:31:40.285 [2024-10-06 11:26:31.130887] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14b94f0 was disconnected and freed. reset controller. 
00:31:40.285 [2024-10-06 11:26:31.130898] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:40.285 [2024-10-06 11:26:31.130924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.285 [2024-10-06 11:26:31.130934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.285 [2024-10-06 11:26:31.130944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.285 [2024-10-06 11:26:31.130952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.285 [2024-10-06 11:26:31.130961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.285 [2024-10-06 11:26:31.130972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.285 [2024-10-06 11:26:31.130981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.285 [2024-10-06 11:26:31.130990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.285 [2024-10-06 11:26:31.130998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:40.285 [2024-10-06 11:26:31.131024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14987a0 (9): Bad file descriptor 00:31:40.285 [2024-10-06 11:26:31.134735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:40.285 [2024-10-06 11:26:31.170686] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
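The failover recorded here ("Start failover from 10.0.0.2:4422 to 10.0.0.2:4420", followed by a successful controller reset) comes out of the multipath setup that host/failover.sh builds over the bdevperf RPC socket, traced a few lines below: the target exposes the same subsystem on several TCP ports and bdev_nvme registers each of them as an alternate trid for one controller, so dropping the active path makes it fail over to the next registered one. A minimal sketch of that flow, assuming an SPDK checkout as the working directory and the same 10.0.0.2 addresses and subsystem name used in this run:

  # target side: listen for nqn.2016-06.io.spdk:cnode1 on three TCP ports
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # initiator side (bdevperf RPC socket): the first attach creates bdev NVMe0n1,
  # the later ones register 4421/4422 as failover paths for the same controller
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # detaching the active path triggers bdev_nvme_failover_trid to the next registered trid
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1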
00:31:40.285 11061.30 IOPS, 43.21 MiB/s 11096.00 IOPS, 43.34 MiB/s 11108.25 IOPS, 43.39 MiB/s 11113.69 IOPS, 43.41 MiB/s 11126.71 IOPS, 43.46 MiB/s 00:31:40.285 Latency(us) 00:31:40.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:40.285 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:40.286 Verification LBA range: start 0x0 length 0x4000 00:31:40.286 NVMe0n1 : 15.00 11129.97 43.48 440.74 0.00 11040.99 620.25 21845.33 00:31:40.286 =================================================================================================================== 00:31:40.286 Total : 11129.97 43.48 440.74 0.00 11040.99 620.25 21845.33 00:31:40.286 Received shutdown signal, test time was about 15.000000 seconds 00:31:40.286 00:31:40.286 Latency(us) 00:31:40.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:40.286 =================================================================================================================== 00:31:40.286 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:40.286 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:40.286 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:31:40.286 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:40.286 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2213541 00:31:40.286 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:40.286 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2213541 /var/tmp/bdevperf.sock 00:31:40.286 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2213541 ']' 00:31:40.286 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:40.286 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:40.286 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:40.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
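The bdevperf restart traced here (host/failover.sh@72 through @75) follows the usual pattern for driving bdevperf remotely: start it with -z so it only opens its RPC socket and waits, configure the NVMe-oF paths through that socket, then kick off the workload with the bdevperf.py helper (host/failover.sh@89 further down), which returns the JSON result block seen below. A minimal sketch, assuming the same socket path and workload parameters as this run:

  # start bdevperf idle on its own RPC socket (-z makes it wait for an RPC before running I/O)
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  # ... attach the controller paths via scripts/rpc.py -s /var/tmp/bdevperf.sock, as above ...
  # run the configured verify workload and collect the JSON results
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests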
00:31:40.286 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:40.286 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:40.286 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:40.286 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:40.286 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:40.286 [2024-10-06 11:26:37.620765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:40.286 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:40.286 [2024-10-06 11:26:37.813318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:40.545 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:40.804 NVMe0n1 00:31:40.804 11:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:41.370 00:31:41.370 11:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:41.629 00:31:41.629 11:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:41.629 11:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:41.888 11:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:42.146 11:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:45.437 11:26:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:45.437 11:26:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:45.437 11:26:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2214374 00:31:45.437 11:26:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:45.437 11:26:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2214374 00:31:46.376 { 00:31:46.376 "results": [ 00:31:46.376 { 00:31:46.376 "job": "NVMe0n1", 00:31:46.376 "core_mask": "0x1", 00:31:46.376 "workload": "verify", 
00:31:46.376 "status": "finished", 00:31:46.376 "verify_range": { 00:31:46.376 "start": 0, 00:31:46.376 "length": 16384 00:31:46.377 }, 00:31:46.377 "queue_depth": 128, 00:31:46.377 "io_size": 4096, 00:31:46.377 "runtime": 1.014225, 00:31:46.377 "iops": 11119.820552638714, 00:31:46.377 "mibps": 43.436799033744975, 00:31:46.377 "io_failed": 0, 00:31:46.377 "io_timeout": 0, 00:31:46.377 "avg_latency_us": 11469.382479500757, 00:31:46.377 "min_latency_us": 2106.5142857142855, 00:31:46.377 "max_latency_us": 9611.946666666667 00:31:46.377 } 00:31:46.377 ], 00:31:46.377 "core_count": 1 00:31:46.377 } 00:31:46.377 11:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:46.377 [2024-10-06 11:26:37.267248] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:31:46.377 [2024-10-06 11:26:37.267302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213541 ] 00:31:46.377 [2024-10-06 11:26:37.323345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.377 [2024-10-06 11:26:37.359961] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.377 [2024-10-06 11:26:39.465729] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:46.377 [2024-10-06 11:26:39.465773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.377 [2024-10-06 11:26:39.465784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.377 [2024-10-06 11:26:39.465792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.377 [2024-10-06 11:26:39.465799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.377 [2024-10-06 11:26:39.465806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.377 [2024-10-06 11:26:39.465813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.377 [2024-10-06 11:26:39.465819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.377 [2024-10-06 11:26:39.465826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.377 [2024-10-06 11:26:39.465832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.377 [2024-10-06 11:26:39.465856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.377 [2024-10-06 11:26:39.465870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20087a0 (9): Bad file descriptor 00:31:46.377 [2024-10-06 11:26:39.558187] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:46.377 Running I/O for 1 seconds... 
00:31:46.377 11041.00 IOPS, 43.13 MiB/s 00:31:46.377 Latency(us) 00:31:46.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.377 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:46.377 Verification LBA range: start 0x0 length 0x4000 00:31:46.377 NVMe0n1 : 1.01 11119.82 43.44 0.00 0.00 11469.38 2106.51 9611.95 00:31:46.377 =================================================================================================================== 00:31:46.377 Total : 11119.82 43.44 0.00 0.00 11469.38 2106.51 9611.95 00:31:46.377 11:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:46.377 11:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:46.636 11:26:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:46.904 11:26:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:46.904 11:26:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:46.904 11:26:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:47.167 11:26:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:50.456 11:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:50.456 11:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:50.456 11:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2213541 00:31:50.456 11:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2213541 ']' 00:31:50.456 11:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2213541 00:31:50.456 11:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:31:50.456 11:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:50.457 11:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2213541 00:31:50.457 11:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:50.457 11:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:50.457 11:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2213541' 00:31:50.457 killing process with pid 2213541 00:31:50.457 11:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2213541 00:31:50.457 11:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2213541 00:31:50.715 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:50.715 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:50.979 rmmod nvme_tcp 00:31:50.979 rmmod nvme_fabrics 00:31:50.979 rmmod nvme_keyring 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 2210612 ']' 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 2210612 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2210612 ']' 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2210612 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2210612 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2210612' 00:31:50.979 killing process with pid 2210612 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2210612 00:31:50.979 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2210612 00:31:51.241 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:51.241 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:51.241 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:51.241 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:31:51.241 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:51.241 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:31:51.241 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:31:51.241 
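Teardown mirrors the setup: failover.sh deletes the subsystem it created, and nvmftestfini unloads the kernel NVMe/TCP initiator modules, stops the nvmf target application (pid 2210612 in this run), and restores the iptables rules before the network namespace is flushed in the lines that follow. A minimal sketch of the same cleanup, assuming the target app and kernel modules were brought up by this suite's setup:

  # drop the test subsystem on the target
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # unload the kernel initiator stack loaded for the host-side tests
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop the nvmf target application started for this suite
  kill <nvmf_tgt_pid>   # 2210612 in this run; killprocess also waits for it to exit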
11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:51.241 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:51.241 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.241 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:51.241 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.145 11:26:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:53.146 00:31:53.146 real 0m37.386s 00:31:53.146 user 1m59.707s 00:31:53.146 sys 0m7.687s 00:31:53.146 11:26:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:53.146 11:26:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:53.146 ************************************ 00:31:53.146 END TEST nvmf_failover 00:31:53.146 ************************************ 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.407 ************************************ 00:31:53.407 START TEST nvmf_host_discovery 00:31:53.407 ************************************ 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:53.407 * Looking for test storage... 
00:31:53.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:53.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.407 --rc genhtml_branch_coverage=1 00:31:53.407 --rc genhtml_function_coverage=1 00:31:53.407 --rc genhtml_legend=1 00:31:53.407 --rc geninfo_all_blocks=1 00:31:53.407 --rc geninfo_unexecuted_blocks=1 00:31:53.407 00:31:53.407 ' 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:53.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.407 --rc genhtml_branch_coverage=1 00:31:53.407 --rc genhtml_function_coverage=1 00:31:53.407 --rc genhtml_legend=1 00:31:53.407 --rc geninfo_all_blocks=1 00:31:53.407 --rc geninfo_unexecuted_blocks=1 00:31:53.407 00:31:53.407 ' 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:53.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.407 --rc genhtml_branch_coverage=1 00:31:53.407 --rc genhtml_function_coverage=1 00:31:53.407 --rc genhtml_legend=1 00:31:53.407 --rc geninfo_all_blocks=1 00:31:53.407 --rc geninfo_unexecuted_blocks=1 00:31:53.407 00:31:53.407 ' 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:53.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.407 --rc genhtml_branch_coverage=1 00:31:53.407 --rc genhtml_function_coverage=1 00:31:53.407 --rc genhtml_legend=1 00:31:53.407 --rc geninfo_all_blocks=1 00:31:53.407 --rc geninfo_unexecuted_blocks=1 00:31:53.407 00:31:53.407 ' 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:53.407 11:26:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:53.407 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:53.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:53.408 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:31:53.667 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:58.943 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:58.944 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:58.944 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:58.944 11:26:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:58.944 Found net devices under 0000:af:00.0: cvl_0_0 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:58.944 Found net devices under 0000:af:00.1: cvl_0_1 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:58.944 
11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:58.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:58.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:31:58.944 00:31:58.944 --- 10.0.0.2 ping statistics --- 00:31:58.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.944 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:58.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:58.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:31:58.944 00:31:58.944 --- 10.0.0.1 ping statistics --- 00:31:58.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.944 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=2218600 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 2218600 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2218600 ']' 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:58.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:58.944 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.204 [2024-10-06 11:26:56.546714] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
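A condensed recap of the nvmf_tcp_init sequence traced above, for readers following the setup: one of the two ice ports found under 0000:af:00.0 / 0000:af:00.1 (cvl_0_0) is moved into a private network namespace and addressed as the target (10.0.0.2), while its sibling (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1). The commands are copied from the trace; the comment tag that the ipts wrapper appends to the iptables rule is omitted here.

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator

With both directions pinging, the target nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2), which produces the initialization records that follow.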
00:31:59.204 [2024-10-06 11:26:56.546766] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.204 [2024-10-06 11:26:56.607624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.204 [2024-10-06 11:26:56.647963] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.204 [2024-10-06 11:26:56.648010] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.204 [2024-10-06 11:26:56.648018] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.204 [2024-10-06 11:26:56.648024] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.204 [2024-10-06 11:26:56.648029] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:59.204 [2024-10-06 11:26:56.648599] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.204 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:59.204 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:31:59.204 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:59.204 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:59.204 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.204 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.204 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:59.204 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.204 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.204 [2024-10-06 11:26:56.776964] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.464 [2024-10-06 11:26:56.789143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.464 null0 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.464 null1 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2218791 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2218791 /tmp/host.sock 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2218791 ']' 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:59.464 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:59.464 11:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.464 [2024-10-06 11:26:56.865504] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
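From this point the test drives two SPDK applications over separate RPC sockets: the target launched inside the namespace answers on the default /var/tmp/spdk.sock, and the host-side app just started with -r /tmp/host.sock answers on /tmp/host.sock, where discovery is started a few lines below with rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test. Every subsequent assertion (controller names from bdev_nvme_get_controllers, bdev names from bdev_get_bdevs, notification counts from notify_get_notifications) is polled through the waitforcondition helper in common/autotest_common.sh; its shape, paraphrased from the lines echoed in the trace (the failure path is not visible in this excerpt and is an assumption), is roughly:

    waitforcondition() {
        local cond=$1      # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10
        while ((max--)); do
            eval "$cond" && return 0   # condition met, stop polling
            sleep 1                    # otherwise wait and retry
        done
        return 1                       # assumption: give up once the retries run out
    }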
00:31:59.464 [2024-10-06 11:26:56.865549] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218791 ] 00:31:59.464 [2024-10-06 11:26:56.918996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.464 [2024-10-06 11:26:56.958509] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:59.724 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:59.725 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.984 [2024-10-06 11:26:57.350552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:59.984 11:26:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:31:59.984 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:59.985 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:59.985 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.985 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:59.985 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.985 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:59.985 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.985 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:31:59.985 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:00.552 [2024-10-06 11:26:58.115247] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:00.552 [2024-10-06 11:26:58.115271] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:00.552 [2024-10-06 11:26:58.115283] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:00.810 
[2024-10-06 11:26:58.201521] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:00.810 [2024-10-06 11:26:58.379491] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:00.810 [2024-10-06 11:26:58.379509] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:01.068 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:01.069 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:01.328 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.329 [2024-10-06 11:26:58.842601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:01.329 [2024-10-06 11:26:58.843609] bdev_nvme.c:7146:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:01.329 [2024-10-06 11:26:58.843630] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 
-- # get_subsystem_names 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:01.329 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:01.588 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.588 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:01.588 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:01.588 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:01.588 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:01.588 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:01.588 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:01.588 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval 
'[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:01.588 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:01.588 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:01.588 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:01.589 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.589 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:01.589 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.589 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:01.589 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.589 [2024-10-06 11:26:58.970008] bdev_nvme.c:7088:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:01.589 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:01.589 11:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:01.589 [2024-10-06 11:26:59.069715] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:01.589 [2024-10-06 11:26:59.069731] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:01.589 [2024-10-06 11:26:59.069736] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:02.525 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:02.525 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:02.525 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:02.525 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:02.525 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:02.525 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:02.526 11:27:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.526 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:02.786 [2024-10-06 11:27:00.102708] bdev_nvme.c:7146:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:02.787 [2024-10-06 11:27:00.102736] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # get_subsystem_names 00:32:02.787 [2024-10-06 11:27:00.111862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.787 [2024-10-06 11:27:00.111882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.787 [2024-10-06 11:27:00.111896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.787 [2024-10-06 11:27:00.111903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.787 [2024-10-06 11:27:00.111910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.787 [2024-10-06 11:27:00.111917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.787 [2024-10-06 11:27:00.111925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.787 [2024-10-06 11:27:00.111931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.787 [2024-10-06 11:27:00.111938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65f890 is same with the state(6) to be set 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:02.787 [2024-10-06 11:27:00.121873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65f890 (9): Bad file descriptor 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.787 [2024-10-06 11:27:00.131910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:02.787 [2024-10-06 11:27:00.132146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:02.787 [2024-10-06 11:27:00.132171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65f890 with addr=10.0.0.2, port=4420 00:32:02.787 [2024-10-06 11:27:00.132179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65f890 is same with the state(6) to be set 00:32:02.787 [2024-10-06 11:27:00.132192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65f890 (9): Bad file descriptor 00:32:02.787 [2024-10-06 11:27:00.132202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:02.787 [2024-10-06 11:27:00.132209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:02.787 [2024-10-06 11:27:00.132217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:02.787 [2024-10-06 11:27:00.132229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:02.787 [2024-10-06 11:27:00.141965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:02.787 [2024-10-06 11:27:00.142147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:02.787 [2024-10-06 11:27:00.142160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65f890 with addr=10.0.0.2, port=4420 00:32:02.787 [2024-10-06 11:27:00.142167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65f890 is same with the state(6) to be set 00:32:02.787 [2024-10-06 11:27:00.142178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65f890 (9): Bad file descriptor 00:32:02.787 [2024-10-06 11:27:00.142187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:02.787 [2024-10-06 11:27:00.142197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:02.787 [2024-10-06 11:27:00.142204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:02.787 [2024-10-06 11:27:00.142213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:02.787 [2024-10-06 11:27:00.152015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:02.787 [2024-10-06 11:27:00.152276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:02.787 [2024-10-06 11:27:00.152291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65f890 with addr=10.0.0.2, port=4420 00:32:02.787 [2024-10-06 11:27:00.152299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65f890 is same with the state(6) to be set 00:32:02.787 [2024-10-06 11:27:00.152310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65f890 (9): Bad file descriptor 00:32:02.787 [2024-10-06 11:27:00.152319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:02.787 [2024-10-06 11:27:00.152326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:02.787 [2024-10-06 11:27:00.152332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:02.787 [2024-10-06 11:27:00.152342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
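
The waitforcondition calls driving this trace (discovery.sh@121 and @122 above, and @129 through @131 below) poll a bash condition until it holds. Reconstructed from the autotest_common.sh@914-@920 xtrace, the helper is roughly the following sketch; the upstream function may differ in details such as what it does when the retries run out (a return 1 is assumed here):

    # Re-evaluate an arbitrary bash condition once per second, up to 10 tries.
    # Returns 0 as soon as the condition holds; returning 1 on exhaustion is assumed.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1
    }

    # Usage matching discovery.sh@121 above:
    # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
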
00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:02.787 [2024-10-06 11:27:00.162073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:02.787 [2024-10-06 11:27:00.162278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:02.787 [2024-10-06 11:27:00.162290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65f890 with addr=10.0.0.2, port=4420 00:32:02.787 [2024-10-06 11:27:00.162298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65f890 is same with the state(6) to be set 00:32:02.787 [2024-10-06 11:27:00.162309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65f890 (9): Bad file descriptor 00:32:02.787 [2024-10-06 11:27:00.162321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:02.787 [2024-10-06 11:27:00.162329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:02.787 [2024-10-06 11:27:00.162337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.787 [2024-10-06 11:27:00.162346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
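
The condition callbacks themselves are thin wrappers around JSON-RPC queries against the host application at /tmp/host.sock, piped through jq, sort and xargs so the result collapses to one space-separated line (which is why the checks compare against strings such as "nvme0n1 nvme0n2" and "4420 4421"). Pieced together from the discovery.sh@55, @59 and @63 xtrace, the helpers look roughly like this sketch; rpc_cmd is assumed to be the test framework's wrapper around scripts/rpc.py for the host socket:

    # Controller names known to the host-side bdev_nvme module (discovery.sh@59).
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    # Namespace bdevs exposed by the attached controllers (discovery.sh@55).
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Service IDs (ports) of every path attached to one controller (discovery.sh@63).
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
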
00:32:02.787 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:02.787 [2024-10-06 11:27:00.172128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:02.787 [2024-10-06 11:27:00.172328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:02.787 [2024-10-06 11:27:00.172346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65f890 with addr=10.0.0.2, port=4420 00:32:02.787 [2024-10-06 11:27:00.172354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65f890 is same with the state(6) to be set 00:32:02.787 [2024-10-06 11:27:00.172365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65f890 (9): Bad file descriptor 00:32:02.787 [2024-10-06 11:27:00.172375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:02.787 [2024-10-06 11:27:00.172381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:02.787 [2024-10-06 11:27:00.172388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:02.787 [2024-10-06 11:27:00.172397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:02.787 [2024-10-06 11:27:00.182183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:02.787 [2024-10-06 11:27:00.182319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:02.787 [2024-10-06 11:27:00.182331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65f890 with addr=10.0.0.2, port=4420 00:32:02.787 [2024-10-06 11:27:00.182338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65f890 is same with the state(6) to be set 00:32:02.787 [2024-10-06 11:27:00.182348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65f890 (9): Bad file descriptor 00:32:02.788 [2024-10-06 11:27:00.182356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:02.788 [2024-10-06 11:27:00.182362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:02.788 [2024-10-06 11:27:00.182368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:02.788 [2024-10-06 11:27:00.182377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
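
The burst of "connect() failed, errno = 111" and "Bad file descriptor" messages above is expected at this point: errno 111 is ECONNREFUSED, and discovery.sh@127 has just removed the 4420 listener from the target, so every reconnect attempt to 10.0.0.2:4420 is refused until the discovery poller prunes that path (the "4420 not found" / "4421 found again" messages that follow). The notification check bracketing this step counts new target notifications past the last seen id; reconstructed from the discovery.sh@74-@80 xtrace, it looks roughly like the sketch below, where the exact id bookkeeping and the semantics of the -i argument are assumptions:

    # notify_get_notifications -i <id> fetches notifications starting from the given
    # id (exact boundary semantics assumed); jq counts them and notify_id is advanced
    # past what has already been seen.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }
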
00:32:02.788 [2024-10-06 11:27:00.188481] bdev_nvme.c:6951:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:02.788 [2024-10-06 11:27:00.188496] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:02.788 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.048 11:27:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:03.985 [2024-10-06 11:27:01.513569] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:03.985 [2024-10-06 11:27:01.513586] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:03.985 [2024-10-06 11:27:01.513598] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:04.245 [2024-10-06 11:27:01.599860] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:04.245 [2024-10-06 11:27:01.780978] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:04.245 [2024-10-06 11:27:01.781004] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:04.245 request: 00:32:04.245 { 00:32:04.245 "name": "nvme", 00:32:04.245 "trtype": "tcp", 00:32:04.245 "traddr": "10.0.0.2", 00:32:04.245 "adrfam": "ipv4", 00:32:04.245 "trsvcid": "8009", 00:32:04.245 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:04.245 "wait_for_attach": true, 00:32:04.245 "method": "bdev_nvme_start_discovery", 00:32:04.245 "req_id": 1 00:32:04.245 } 00:32:04.245 Got JSON-RPC error response 00:32:04.245 response: 00:32:04.245 { 00:32:04.245 "code": -17, 00:32:04.245 "message": "File exists" 00:32:04.245 } 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:04.245 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:04.504 request: 00:32:04.504 { 00:32:04.504 "name": "nvme_second", 00:32:04.504 "trtype": "tcp", 00:32:04.504 "traddr": "10.0.0.2", 00:32:04.504 "adrfam": "ipv4", 00:32:04.504 "trsvcid": "8009", 00:32:04.504 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:04.504 "wait_for_attach": true, 00:32:04.504 "method": "bdev_nvme_start_discovery", 00:32:04.504 "req_id": 1 00:32:04.504 } 00:32:04.504 Got JSON-RPC error response 00:32:04.504 response: 00:32:04.504 { 00:32:04.504 "code": -17, 00:32:04.504 "message": "File exists" 00:32:04.504 } 00:32:04.504 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:04.505 11:27:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:04.505 11:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.505 11:27:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:05.442 [2024-10-06 11:27:03.008675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.442 [2024-10-06 11:27:03.008703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68f210 with addr=10.0.0.2, port=8010 00:32:05.442 [2024-10-06 11:27:03.008716] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:05.442 [2024-10-06 11:27:03.008722] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:05.442 [2024-10-06 11:27:03.008728] bdev_nvme.c:7226:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:06.822 [2024-10-06 11:27:04.011107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.822 [2024-10-06 11:27:04.011131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68f210 with addr=10.0.0.2, port=8010 00:32:06.822 [2024-10-06 11:27:04.011143] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:06.822 [2024-10-06 11:27:04.011149] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:32:06.822 [2024-10-06 11:27:04.011154] bdev_nvme.c:7226:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:07.765 [2024-10-06 11:27:05.013243] bdev_nvme.c:7207:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:07.765 request: 00:32:07.765 { 00:32:07.766 "name": "nvme_second", 00:32:07.766 "trtype": "tcp", 00:32:07.766 "traddr": "10.0.0.2", 00:32:07.766 "adrfam": "ipv4", 00:32:07.766 "trsvcid": "8010", 00:32:07.766 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:07.766 "wait_for_attach": false, 00:32:07.766 "attach_timeout_ms": 3000, 00:32:07.766 "method": "bdev_nvme_start_discovery", 00:32:07.766 "req_id": 1 00:32:07.766 } 00:32:07.766 Got JSON-RPC error response 00:32:07.766 response: 00:32:07.766 { 00:32:07.766 "code": -110, 00:32:07.766 "message": "Connection timed out" 00:32:07.766 } 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2218791 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:07.766 rmmod nvme_tcp 00:32:07.766 rmmod nvme_fabrics 00:32:07.766 rmmod nvme_keyring 00:32:07.766 11:27:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 2218600 ']' 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 2218600 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 2218600 ']' 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 2218600 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2218600 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2218600' 00:32:07.766 killing process with pid 2218600 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 2218600 00:32:07.766 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 2218600 00:32:08.027 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:08.027 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:08.027 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:08.027 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:08.027 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:32:08.027 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:08.027 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:32:08.027 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:08.027 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:08.027 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.027 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:08.027 11:27:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.936 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:09.936 00:32:09.936 real 0m16.682s 00:32:09.936 user 0m20.203s 00:32:09.936 sys 0m5.462s 00:32:09.936 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:09.936 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.936 
************************************ 00:32:09.936 END TEST nvmf_host_discovery 00:32:09.936 ************************************ 00:32:09.936 11:27:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:09.936 11:27:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:09.936 11:27:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:09.936 11:27:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.197 ************************************ 00:32:10.197 START TEST nvmf_host_multipath_status 00:32:10.197 ************************************ 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:10.197 * Looking for test storage... 00:32:10.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:10.197 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:10.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.198 --rc genhtml_branch_coverage=1 00:32:10.198 --rc genhtml_function_coverage=1 00:32:10.198 --rc genhtml_legend=1 00:32:10.198 --rc geninfo_all_blocks=1 00:32:10.198 --rc geninfo_unexecuted_blocks=1 00:32:10.198 00:32:10.198 ' 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:10.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.198 --rc genhtml_branch_coverage=1 00:32:10.198 --rc genhtml_function_coverage=1 00:32:10.198 --rc genhtml_legend=1 00:32:10.198 --rc geninfo_all_blocks=1 00:32:10.198 --rc geninfo_unexecuted_blocks=1 00:32:10.198 00:32:10.198 ' 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:10.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.198 --rc genhtml_branch_coverage=1 00:32:10.198 --rc genhtml_function_coverage=1 00:32:10.198 --rc genhtml_legend=1 00:32:10.198 --rc geninfo_all_blocks=1 00:32:10.198 --rc geninfo_unexecuted_blocks=1 00:32:10.198 00:32:10.198 ' 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:10.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.198 --rc genhtml_branch_coverage=1 00:32:10.198 --rc genhtml_function_coverage=1 00:32:10.198 --rc genhtml_legend=1 00:32:10.198 --rc geninfo_all_blocks=1 00:32:10.198 --rc geninfo_unexecuted_blocks=1 00:32:10.198 00:32:10.198 ' 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
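
The nvmf_host_multipath_status prologue above probes the installed lcov version through scripts/common.sh (lt, cmp_versions, decimal) to decide which coverage flags to export: each version string is split on '.', '-' and ':' and the components are compared numerically, so lcov 1.15 sorts below 2 and the older LCOV_OPTS set is selected. A simplified sketch of that comparison, reconstructed from the @333-@368 xtrace and not the exact upstream helper, is:

    # Compare two dotted version strings component by component as integers.
    # Only the '<' and '>' operators are sketched here.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        return 1   # equal versions satisfy neither '<' nor '>'
    }

    lt() { cmp_versions "$1" '<' "$2"; }

    # lt 1.15 2 succeeds, matching the trace above.
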
00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:10.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:10.198 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:10.199 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:10.199 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:10.199 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:10.199 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:10.199 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:10.199 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:10.199 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:10.199 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:10.199 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:10.199 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:10.199 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:10.199 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.199 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:10.199 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.199 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:10.199 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:10.199 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:10.199 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:15.472 11:27:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:15.472 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
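The trace above is nvmf/common.sh sizing up the host's NICs before the TCP test starts: it builds lists of supported device IDs for Intel E810 (0x1592, 0x159b) and X722 (0x37d2) parts plus a range of Mellanox ConnectX parts, then walks each matching PCI address and keeps only devices usable for the selected transport. Both ports of the E810 at 0000:af:00.0 / 0000:af:00.1 (0x8086:0x159b, ice driver) pass the filter here. A rough standalone sketch of that classification step, using lspci instead of the script's pci_bus_cache helper (the ID list is copied from the trace; the loop itself is illustrative, not the common.sh code):

  #!/usr/bin/env bash
  # List Ethernet-class PCI devices whose vendor:device ID is in the "supported" set above.
  supported="8086:1592 8086:159b 8086:37d2 15b3:a2dc 15b3:1021 15b3:a2d6 15b3:101d 15b3:101b 15b3:1017 15b3:1019 15b3:1015 15b3:1013"
  while read -r addr id; do
    for s in $supported; do
      [[ $id == "$s" ]] && echo "Found $addr ($id)"
    done
  done < <(lspci -Dn -d ::0200 | awk '{print $1, $3}')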
00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:15.472 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:15.472 Found net devices under 0000:af:00.0: cvl_0_0 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:32:15.472 Found net devices under 0000:af:00.1: cvl_0_1 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:15.472 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:15.732 11:27:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:15.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:15.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:32:15.732 00:32:15.732 --- 10.0.0.2 ping statistics --- 00:32:15.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.732 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:15.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:15.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:32:15.732 00:32:15.732 --- 10.0.0.1 ping statistics --- 00:32:15.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.732 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=2224106 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 2224106 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2224106 ']' 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:15.732 11:27:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:15.732 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:15.732 [2024-10-06 11:27:13.271475] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:32:15.732 [2024-10-06 11:27:13.271516] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:15.991 [2024-10-06 11:27:13.329129] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:15.991 [2024-10-06 11:27:13.369627] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:15.991 [2024-10-06 11:27:13.369667] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:15.991 [2024-10-06 11:27:13.369674] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:15.991 [2024-10-06 11:27:13.369680] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:15.991 [2024-10-06 11:27:13.369686] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:15.991 [2024-10-06 11:27:13.370411] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.991 [2024-10-06 11:27:13.370413] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.991 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:15.991 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:15.991 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:15.991 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:15.991 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:15.991 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:15.991 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2224106 00:32:15.991 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:16.250 [2024-10-06 11:27:13.668094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:16.250 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:16.509 Malloc0 00:32:16.509 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:32:16.771 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:16.771 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:17.068 [2024-10-06 11:27:14.455355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.068 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:17.366 [2024-10-06 11:27:14.651867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:17.366 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2224350 00:32:17.366 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:17.366 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:17.366 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2224350 /var/tmp/bdevperf.sock 00:32:17.366 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2224350 ']' 00:32:17.366 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:17.366 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:17.366 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:17.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
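By this point the target side is fully assembled: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace on cores 0-1 (pid 2224106), and the RPCs above create the TCP transport, back the subsystem with a 64 MB Malloc bdev using 512-byte blocks, and expose nqn.2016-06.io.spdk:cnode1 on both 10.0.0.2:4420 and 10.0.0.2:4421, giving the host two paths to the same namespace. Condensed to just the RPC sequence (full rpc.py paths shortened; all calls are taken from the trace):

  # Target-side setup issued through scripts/rpc.py against the namespaced nvmf_tgt:
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf is then started with -z (wait for RPC) on /var/tmp/bdevperf.sock; the two bdev_nvme_attach_controller calls that follow (port 4420, then 4421 with -x multipath) hand it a single Nvme0n1 bdev reachable over both listeners, which is what the ANA-state checks below exercise.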
00:32:17.366 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:17.366 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:17.366 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:17.367 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:17.367 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:17.625 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:32:17.884 Nvme0n1 00:32:17.884 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:18.453 Nvme0n1 00:32:18.453 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:18.453 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:20.359 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:20.359 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:20.627 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:20.627 11:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:21.564 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:21.564 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:21.564 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:21.564 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:21.822 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:21.822 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:21.823 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:21.823 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:22.081 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:22.081 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:22.081 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.081 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:22.340 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:22.340 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:22.340 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.340 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:22.599 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:22.599 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:22.599 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.599 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:22.599 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:22.599 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:22.599 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:22.599 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.858 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:22.858 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:22.858 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:23.117 11:27:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:23.377 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:24.314 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:24.314 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:24.314 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:24.314 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:24.573 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:24.573 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:24.573 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:24.573 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:24.832 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:24.832 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:24.832 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:24.832 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:24.832 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:24.832 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:24.832 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:24.832 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:25.091 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:25.091 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:25.091 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:25.091 11:27:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:25.351 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:25.351 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:25.351 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:25.351 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:25.611 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:25.611 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:25.611 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:25.611 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:25.870 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:27.250 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:27.250 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:27.250 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.250 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:27.250 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.250 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:27.250 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.250 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:27.250 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:27.250 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:27.250 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.250 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:27.509 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.509 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:27.509 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.509 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:27.769 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.769 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:27.769 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.769 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:28.028 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:28.028 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:28.028 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:28.028 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:28.028 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:28.028 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:28.028 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:28.287 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:28.545 11:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:29.483 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:29.483 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:29.483 11:27:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.483 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:29.743 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:29.743 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:29.743 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.743 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:30.001 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:30.001 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:30.001 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:30.001 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:30.260 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:30.260 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:30.260 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:30.260 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:30.519 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:30.519 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:30.519 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:30.519 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:30.519 11:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:30.519 11:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:30.519 11:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:30.519 11:27:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:30.778 11:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:30.778 11:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:30.778 11:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:31.036 11:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:31.295 11:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:32.233 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:32.233 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:32.233 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:32.233 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:32.492 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:32.492 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:32.492 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:32.492 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:32.492 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:32.492 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:32.492 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:32.492 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:32.751 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:32.751 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:32.751 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:32.751 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:33.010 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:33.010 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:33.010 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:33.010 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:33.269 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:33.270 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:33.270 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:33.270 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:33.529 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:33.529 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:33.529 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:33.529 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:33.788 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:34.727 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:34.727 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:34.727 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:34.727 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:34.986 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:34.986 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:34.986 11:27:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:34.986 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:35.246 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:35.246 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:35.246 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.246 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:35.505 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:35.505 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:35.505 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:35.505 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.505 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:35.505 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:35.505 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.505 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:35.765 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:35.765 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:35.765 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.765 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:36.024 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:36.024 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:36.284 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:32:36.284 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:36.284 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:36.543 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:37.923 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:37.923 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:37.923 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.923 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:37.923 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.923 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:37.923 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.923 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:37.923 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.923 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:37.923 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.923 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:38.183 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.183 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:38.183 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.183 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:38.441 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.441 11:27:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:38.441 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:38.441 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.700 11:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.700 11:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:38.700 11:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.700 11:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:38.959 11:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.959 11:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:38.959 11:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:38.959 11:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:39.218 11:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:40.157 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:40.157 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:40.157 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.157 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:40.416 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:40.416 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:40.416 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.416 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:40.674 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:40.674 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:40.674 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.674 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:40.946 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:40.946 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:40.946 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.946 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:41.206 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:41.206 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:41.207 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.207 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:41.207 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:41.207 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:41.207 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.207 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:41.465 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:41.465 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:41.465 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:41.725 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:41.984 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
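(Aside, for readability of the trace above and below: a reconstructed sketch of the three bash helpers being exercised — set_ANA_state, port_status and check_status — pieced together from the traced commands. This is not the verbatim multipath_status.sh source; the rpc_py variable name is an assumption, everything else mirrors the trace.

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed helper variable

set_ANA_state() {
    # $1 / $2 = ANA state to apply to the 4420 / 4421 listeners of cnode1
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n $1
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n $2
}

port_status() {
    # $1 = listener port, $2 = io_path field (current|connected|accessible), $3 = expected value
    [[ "$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")" == "$3" ]]
}

check_status() {
    # argument order as seen in the trace:
    # 4420.current 4421.current 4420.connected 4421.connected 4420.accessible 4421.accessible
    port_status 4420 current $1 && port_status 4421 current $2 &&
    port_status 4420 connected $3 && port_status 4421 connected $4 &&
    port_status 4420 accessible $5 && port_status 4421 accessible $6
}
)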
00:32:42.922 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:42.922 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:42.922 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:42.922 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:43.181 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.181 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:43.181 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.181 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:43.439 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.439 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:43.439 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.439 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:43.439 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.439 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:43.439 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.439 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:43.697 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.697 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:43.697 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.697 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:43.957 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.957 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:43.957 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.957 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:44.216 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.216 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:44.216 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:44.475 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:44.475 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:45.854 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:45.854 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:45.854 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:45.854 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:45.854 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:45.854 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:45.854 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:45.854 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:46.113 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:46.113 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:46.113 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.113 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:46.113 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:32:46.113 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:46.113 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.113 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:46.372 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:46.372 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:46.372 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.372 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:46.631 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:46.631 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:46.631 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.631 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:46.891 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:46.891 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2224350 00:32:46.891 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2224350 ']' 00:32:46.891 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2224350 00:32:46.891 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:32:46.891 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:46.891 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2224350 00:32:46.891 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:32:46.891 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:32:46.891 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2224350' 00:32:46.891 killing process with pid 2224350 00:32:46.891 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2224350 00:32:46.891 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2224350 00:32:46.891 { 00:32:46.891 "results": [ 00:32:46.891 { 00:32:46.891 "job": "Nvme0n1", 
00:32:46.891 "core_mask": "0x4", 00:32:46.891 "workload": "verify", 00:32:46.891 "status": "terminated", 00:32:46.891 "verify_range": { 00:32:46.891 "start": 0, 00:32:46.891 "length": 16384 00:32:46.891 }, 00:32:46.891 "queue_depth": 128, 00:32:46.891 "io_size": 4096, 00:32:46.891 "runtime": 28.387136, 00:32:46.891 "iops": 10429.93558772537, 00:32:46.891 "mibps": 40.741935889552224, 00:32:46.891 "io_failed": 0, 00:32:46.891 "io_timeout": 0, 00:32:46.891 "avg_latency_us": 12251.5296275924, 00:32:46.891 "min_latency_us": 323.7790476190476, 00:32:46.891 "max_latency_us": 3019898.88 00:32:46.891 } 00:32:46.891 ], 00:32:46.891 "core_count": 1 00:32:46.891 } 00:32:47.154 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2224350 00:32:47.154 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:47.154 [2024-10-06 11:27:14.719705] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:32:47.154 [2024-10-06 11:27:14.719759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224350 ] 00:32:47.154 [2024-10-06 11:27:14.770654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.154 [2024-10-06 11:27:14.809855] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:47.154 [2024-10-06 11:27:15.667130] bdev_nvme.c:5607:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:32:47.154 Running I/O for 90 seconds... 
00:32:47.154 11034.00 IOPS, 43.10 MiB/s 11125.50 IOPS, 43.46 MiB/s 11201.33 IOPS, 43.76 MiB/s 11215.75 IOPS, 43.81 MiB/s 11206.00 IOPS, 43.77 MiB/s 11207.17 IOPS, 43.78 MiB/s 11210.00 IOPS, 43.79 MiB/s 11224.00 IOPS, 43.84 MiB/s 11224.67 IOPS, 43.85 MiB/s 11233.30 IOPS, 43.88 MiB/s 11237.36 IOPS, 43.90 MiB/s 11227.42 IOPS, 43.86 MiB/s [2024-10-06 11:27:28.445026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.154 [2024-10-06 11:27:28.445067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:47.154 [2024-10-06 11:27:28.445117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.154 [2024-10-06 11:27:28.445126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:47.154 [2024-10-06 11:27:28.445139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.154 [2024-10-06 11:27:28.445147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:47.154 [2024-10-06 11:27:28.445160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.154 [2024-10-06 11:27:28.445166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:47.154 [2024-10-06 11:27:28.445179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.154 [2024-10-06 11:27:28.445186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:47.154 [2024-10-06 11:27:28.445198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.154 [2024-10-06 11:27:28.445205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:47.154 [2024-10-06 11:27:28.445218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.155 [2024-10-06 11:27:28.445225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.445237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.155 [2024-10-06 11:27:28.445243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.445255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.155 [2024-10-06 11:27:28.445262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 
sqhd:005d p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.445281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.155 [2024-10-06 11:27:28.445289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.446473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.155 [2024-10-06 11:27:28.446485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.446501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.446508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.446523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.446530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.446545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.446552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.446566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.446573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.446588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.446595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.446609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.446616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.446630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.446637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.446652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.446658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.446672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.446679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.446693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.446700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.446717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.446724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.446738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.446745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.446760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.446766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.446781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.446788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.446804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.446811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.447077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.447091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.447109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.447116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.447132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.447140] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.447156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.447163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.447179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.447186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.447201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.447208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.447224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.447231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.447246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.447256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.447271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.447278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.447293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.447301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.447316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.447323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.447339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.447345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.447361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:47.155 [2024-10-06 11:27:28.447368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.447384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.155 [2024-10-06 11:27:28.447392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:47.155 [2024-10-06 11:27:28.447407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 
nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.447981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.447998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.448005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.448022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.448028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.448048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.448055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.448078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.448086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.448104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.448111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:47.156 
[2024-10-06 11:27:28.448128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.448135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.448153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.448159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.448178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.448185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.448203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.448210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.448227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.448234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.448253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.448260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.448277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.448283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.448300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.448307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.448325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.156 [2024-10-06 11:27:28.448331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.156 [2024-10-06 11:27:28.448385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.157 [2024-10-06 11:27:28.448394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:28.448413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.157 [2024-10-06 11:27:28.448420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:28.448438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.157 [2024-10-06 11:27:28.448444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:28.448463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.157 [2024-10-06 11:27:28.448470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:47.157 10862.62 IOPS, 42.43 MiB/s 10086.71 IOPS, 39.40 MiB/s 9414.27 IOPS, 36.77 MiB/s 9140.19 IOPS, 35.70 MiB/s 9274.18 IOPS, 36.23 MiB/s 9383.39 IOPS, 36.65 MiB/s 9595.74 IOPS, 37.48 MiB/s 9785.90 IOPS, 38.23 MiB/s 9923.81 IOPS, 38.76 MiB/s 9982.09 IOPS, 38.99 MiB/s 10030.57 IOPS, 39.18 MiB/s 10126.58 IOPS, 39.56 MiB/s 10250.08 IOPS, 40.04 MiB/s 10369.04 IOPS, 40.50 MiB/s [2024-10-06 11:27:42.003880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.157 [2024-10-06 11:27:42.003917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.003966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.157 [2024-10-06 11:27:42.003975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.003988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.157 [2024-10-06 11:27:42.003995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.157 [2024-10-06 11:27:42.004014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.157 [2024-10-06 11:27:42.004078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.157 [2024-10-06 11:27:42.004097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.157 [2024-10-06 11:27:42.004117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:69320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
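(Aside: every READ/WRITE completion in this stretch of try.txt carries the NVMe status ASYMMETRIC ACCESS INACCESSIBLE (03/02) — the expected failure once a listener's ANA state has been flipped to inaccessible, after which the multipath bdev is expected to re-route the I/O to the remaining path; the dips and recoveries in the periodic IOPS counters above reflect that. An illustrative one-liner, not part of the test, to tally those completions from the same log:

grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
)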
00:32:47.157 [2024-10-06 11:27:42.004256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:69352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:69384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:69480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 
nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.004643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.004650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.005031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.005043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.005063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.005071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.005083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.005090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:47.157 [2024-10-06 11:27:42.005102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:69856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.157 [2024-10-06 11:27:42.005109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:69568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:69600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 
dnr:0 00:32:47.158 [2024-10-06 11:27:42.005354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:69864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.158 [2024-10-06 11:27:42.005418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:47.158 [2024-10-06 11:27:42.005531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.158 [2024-10-06 11:27:42.005538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:32:47.158 [2024-10-06 11:27:42.005550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.158 [2024-10-06 11:27:42.005557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:32:47.158 [2024-10-06 11:27:42.005569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.158 [2024-10-06 11:27:42.005576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:32:47.158 [2024-10-06 11:27:42.005799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.158 [2024-10-06 11:27:42.005809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:32:47.158 10401.44 IOPS, 40.63 MiB/s 10425.39 IOPS, 40.72 MiB/s Received shutdown signal, test time was about 28.387772 seconds
00:32:47.158
00:32:47.158 Latency(us)
00:32:47.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:47.158 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:47.158 Verification LBA range: start 0x0 length 0x4000
00:32:47.158 Nvme0n1 : 28.39 10429.94 40.74 0.00 0.00 12251.53 323.78 3019898.88
00:32:47.158 ===================================================================================================================
00:32:47.158 Total : 10429.94 40.74 0.00 0.00 12251.53 323.78 3019898.88
00:32:47.158 [2024-10-06 11:27:44.296787] app.c:1033:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times
00:32:47.158 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:47.158 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:32:47.158 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:47.158 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:32:47.158 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup
00:32:47.158 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:32:47.158 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:47.158 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:32:47.158 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:47.158 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:47.157 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:32:47.418 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- #
modprobe -v -r nvme-fabrics 00:32:47.418 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:32:47.418 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:32:47.418 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 2224106 ']' 00:32:47.418 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 2224106 00:32:47.418 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2224106 ']' 00:32:47.418 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2224106 00:32:47.418 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:32:47.418 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:47.418 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2224106 00:32:47.418 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:47.418 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:47.418 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2224106' 00:32:47.418 killing process with pid 2224106 00:32:47.418 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2224106 00:32:47.418 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2224106 00:32:47.678 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:47.678 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:47.678 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:47.678 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:32:47.678 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:32:47.678 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:32:47.678 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:47.678 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:47.678 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:47.678 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.678 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.678 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.584 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:49.584 00:32:49.584 real 0m39.555s 00:32:49.584 user 1m47.699s 00:32:49.584 sys 0m11.222s 00:32:49.584 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 
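
For reference, the nvmftestfini teardown traced above condenses to the following minimal sketch. Command and interface names are taken from the trace; _remove_spdk_ns runs with xtrace disabled here, so the namespace deletion step is an assumption, not a line from this log.

    # delete the test subsystem, then unload the host-side NVMe/TCP modules
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp        # trace shows rmmod of nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # nvmf_tgt started earlier (pid 2224106 in this run)
    # drop only the firewall rules the test added; they carry an SPDK_NVMF comment tag
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # assumption: _remove_spdk_ns removes the per-test namespace created during setup
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1
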
00:32:49.584 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:49.584 ************************************ 00:32:49.584 END TEST nvmf_host_multipath_status 00:32:49.584 ************************************ 00:32:49.584 11:27:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:49.584 11:27:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:49.584 11:27:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:49.584 11:27:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.584 ************************************ 00:32:49.584 START TEST nvmf_discovery_remove_ifc 00:32:49.584 ************************************ 00:32:49.584 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:49.844 * Looking for test storage... 00:32:49.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:49.844 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:49.844 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:32:49.844 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:49.844 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:49.844 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:49.844 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:49.844 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:49.844 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:32:49.844 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:32:49.844 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:32:49.844 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:32:49.844 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:32:49.844 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:32:49.844 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:32:49.844 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:49.844 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:49.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.845 --rc genhtml_branch_coverage=1 00:32:49.845 --rc genhtml_function_coverage=1 00:32:49.845 --rc genhtml_legend=1 00:32:49.845 --rc geninfo_all_blocks=1 00:32:49.845 --rc geninfo_unexecuted_blocks=1 00:32:49.845 00:32:49.845 ' 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:49.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.845 --rc genhtml_branch_coverage=1 00:32:49.845 --rc genhtml_function_coverage=1 00:32:49.845 --rc genhtml_legend=1 00:32:49.845 --rc geninfo_all_blocks=1 00:32:49.845 --rc geninfo_unexecuted_blocks=1 00:32:49.845 00:32:49.845 ' 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:49.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.845 --rc genhtml_branch_coverage=1 00:32:49.845 --rc genhtml_function_coverage=1 00:32:49.845 --rc genhtml_legend=1 00:32:49.845 --rc geninfo_all_blocks=1 00:32:49.845 --rc geninfo_unexecuted_blocks=1 00:32:49.845 00:32:49.845 ' 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:49.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.845 --rc genhtml_branch_coverage=1 00:32:49.845 --rc genhtml_function_coverage=1 00:32:49.845 --rc genhtml_legend=1 00:32:49.845 --rc geninfo_all_blocks=1 00:32:49.845 --rc geninfo_unexecuted_blocks=1 00:32:49.845 00:32:49.845 ' 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:49.845 
11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:49.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:49.845 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.846 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.846 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.846 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:49.846 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:49.846 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:32:49.846 11:27:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:32:55.122 11:27:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:55.122 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:55.122 11:27:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:55.122 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:55.122 Found net devices under 0000:af:00.0: cvl_0_0 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:55.122 Found net devices under 0000:af:00.1: cvl_0_1 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:55.122 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:55.123 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:55.382 
11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:55.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:55.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:32:55.382 00:32:55.382 --- 10.0.0.2 ping statistics --- 00:32:55.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.382 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:55.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:55.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:32:55.382 00:32:55.382 --- 10.0.0.1 ping statistics --- 00:32:55.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.382 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=2232688 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 2232688 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2232688 ']' 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
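
Condensed from the nvmf/common.sh trace above, the target/initiator network plumbing for this test amounts to the sketch below. Device names (cvl_0_0, cvl_0_1), addresses, and ports come straight from the trace; the iptables comment string is shortened here for readability.

    # move one port of the NIC pair into a private namespace to act as the target
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side stays in the default namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port, tagged so teardown can strip the rule again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    # verify reachability in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
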
00:32:55.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:55.382 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:55.382 [2024-10-06 11:27:52.814343] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:32:55.382 [2024-10-06 11:27:52.814385] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:55.382 [2024-10-06 11:27:52.872476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.382 [2024-10-06 11:27:52.910676] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:55.382 [2024-10-06 11:27:52.910713] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:55.382 [2024-10-06 11:27:52.910720] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:55.382 [2024-10-06 11:27:52.910726] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:55.382 [2024-10-06 11:27:52.910732] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:55.382 [2024-10-06 11:27:52.911283] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.642 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:55.642 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:32:55.642 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:55.642 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:55.642 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:55.642 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:55.642 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:55.642 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.642 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:55.642 [2024-10-06 11:27:53.043217] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:55.642 [2024-10-06 11:27:53.051380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:55.642 null0 00:32:55.642 [2024-10-06 11:27:53.083365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:55.642 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.642 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2232709 00:32:55.642 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:32:55.642 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2232709 /tmp/host.sock 00:32:55.642 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2232709 ']' 00:32:55.642 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:32:55.642 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:55.642 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:55.642 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:55.642 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:55.642 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:55.642 [2024-10-06 11:27:53.153483] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:32:55.642 [2024-10-06 11:27:53.153525] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2232709 ] 00:32:55.642 [2024-10-06 11:27:53.208487] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.902 [2024-10-06 11:27:53.248709] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.902 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:55.902 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:32:55.902 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:55.902 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:55.902 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.902 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:55.902 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.902 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:55.902 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.902 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:55.902 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.902 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:55.902 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.902 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:57.281 [2024-10-06 11:27:54.439607] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:57.281 [2024-10-06 11:27:54.439629] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:57.281 [2024-10-06 11:27:54.439642] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:57.281 [2024-10-06 11:27:54.567028] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:57.281 [2024-10-06 11:27:54.794126] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:57.281 [2024-10-06 11:27:54.794171] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:57.281 [2024-10-06 11:27:54.794192] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:57.281 [2024-10-06 11:27:54.794203] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:57.281 [2024-10-06 11:27:54.794221] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:57.281 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.281 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:57.281 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:57.281 [2024-10-06 11:27:54.799179] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x223fe50 was disconnected and freed. delete nvme_qpair. 
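The get_bdev_list/wait_for_bdev helpers that drive the next several iterations of this trace are just a poll over the bdev_get_bdevs RPC. A minimal sketch of what the trace shows them doing, assuming rpc_cmd resolves to SPDK's scripts/rpc.py and the host socket is /tmp/host.sock as above:

    # names, socket path, jq/sort/xargs pipeline and 1 s poll all taken from the trace;
    # the rpc.py location is an assumption
    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    # wait_for_bdev polls once per second until the bdev list matches the expected value
    while [[ "$(get_bdev_list)" != "nvme0n1" ]]; do
        sleep 1
    done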
00:32:57.281 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.281 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:57.281 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.281 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:57.281 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:57.281 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:57.281 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.281 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:57.281 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:57.281 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:57.541 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:57.541 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:57.541 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.541 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:57.541 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.541 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:57.541 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:57.541 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:57.541 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.541 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:57.541 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:58.478 11:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:58.478 11:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:58.478 11:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:58.478 11:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:58.478 11:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.478 11:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:58.478 11:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:58.478 11:27:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.478 11:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:58.478 11:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:59.857 11:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:59.857 11:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:59.857 11:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:59.857 11:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.857 11:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:59.857 11:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:59.857 11:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:59.857 11:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.857 11:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:59.857 11:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:00.795 11:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:00.795 11:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:00.795 11:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:00.795 11:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:00.795 11:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.795 11:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:00.795 11:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:00.795 11:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.795 11:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:00.795 11:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:01.758 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:01.758 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:01.758 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:01.758 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:01.758 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.758 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:01.758 11:27:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:01.758 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.758 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:01.758 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:02.729 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:02.729 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:02.729 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:02.729 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.729 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:02.729 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:02.729 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:02.729 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.729 [2024-10-06 11:28:00.235864] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:02.729 [2024-10-06 11:28:00.235910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.729 [2024-10-06 11:28:00.235922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.729 [2024-10-06 11:28:00.235931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.729 [2024-10-06 11:28:00.235938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.729 [2024-10-06 11:28:00.235945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.729 [2024-10-06 11:28:00.235951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.729 [2024-10-06 11:28:00.235958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.729 [2024-10-06 11:28:00.235964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.729 [2024-10-06 11:28:00.235972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.729 [2024-10-06 11:28:00.235978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.729 [2024-10-06 11:28:00.235985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221c600 is same with the state(6) to be set 00:33:02.729 [2024-10-06 
11:28:00.245886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221c600 (9): Bad file descriptor 00:33:02.729 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:02.729 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:02.729 [2024-10-06 11:28:00.255924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:04.107 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:04.107 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:04.107 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:04.107 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.107 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:04.107 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:04.107 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:04.107 [2024-10-06 11:28:01.270080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:04.107 [2024-10-06 11:28:01.270126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c600 with addr=10.0.0.2, port=4420 00:33:04.107 [2024-10-06 11:28:01.270144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221c600 is same with the state(6) to be set 00:33:04.107 [2024-10-06 11:28:01.270173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221c600 (9): Bad file descriptor 00:33:04.107 [2024-10-06 11:28:01.270626] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:04.107 [2024-10-06 11:28:01.270655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:04.107 [2024-10-06 11:28:01.270666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:04.107 [2024-10-06 11:28:01.270678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:04.107 [2024-10-06 11:28:01.270703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.107 [2024-10-06 11:28:01.270715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:04.107 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.107 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:04.107 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:05.045 [2024-10-06 11:28:02.273185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
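The connect()/flush errors above are the expected symptom: a few steps earlier the test deleted the target address and downed the interface inside the target's network namespace, so every reconnect attempt times out (errno 110, ETIMEDOUT). The two commands responsible, copied from the trace at discovery_remove_ifc.sh lines 75-76:

    # executed earlier in the trace; cvl_0_0_ns_spdk is the target namespace
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down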
00:33:05.045 [2024-10-06 11:28:02.273205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:05.045 [2024-10-06 11:28:02.273212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:05.045 [2024-10-06 11:28:02.273219] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:33:05.045 [2024-10-06 11:28:02.273231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:05.045 [2024-10-06 11:28:02.273248] bdev_nvme.c:6915:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:05.045 [2024-10-06 11:28:02.273267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.045 [2024-10-06 11:28:02.273276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.045 [2024-10-06 11:28:02.273285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.045 [2024-10-06 11:28:02.273292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.045 [2024-10-06 11:28:02.273299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.045 [2024-10-06 11:28:02.273306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.045 [2024-10-06 11:28:02.273313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.045 [2024-10-06 11:28:02.273319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.045 [2024-10-06 11:28:02.273326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.045 [2024-10-06 11:28:02.273332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.045 [2024-10-06 11:28:02.273339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
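For context on how quickly the controller is given up on: the discovery session was created with --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1, so the bdev module retries roughly once per second and deletes the controller (and with it nvme0n1) after about two seconds without connectivity, which is what the empty bdev list being polled for represents. An illustrative rendering of the same RPC as it would be issued outside the test harness; the rpc.py path is an assumption, all flags and values are copied from the earlier bdev_nvme_start_discovery call in the trace:

    # illustrative only; values taken from the trace above
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach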
00:33:05.045 [2024-10-06 11:28:02.273400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220bd40 (9): Bad file descriptor 00:33:05.045 [2024-10-06 11:28:02.274410] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:05.045 [2024-10-06 11:28:02.274419] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:33:05.045 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:05.045 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:05.045 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:05.045 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:05.045 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.045 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:05.045 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:05.045 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.046 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:05.046 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:05.046 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:05.046 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:05.046 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:05.046 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:05.046 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:05.046 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.046 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:05.046 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:05.046 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:05.046 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.046 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:05.046 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:05.984 11:28:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:05.984 11:28:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:05.984 11:28:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:05.984 11:28:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.984 11:28:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:05.984 11:28:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:05.984 11:28:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:05.984 11:28:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.984 11:28:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:05.984 11:28:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:06.922 [2024-10-06 11:28:04.326198] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:06.922 [2024-10-06 11:28:04.326214] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:06.922 [2024-10-06 11:28:04.326228] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:06.922 [2024-10-06 11:28:04.412492] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:06.922 [2024-10-06 11:28:04.475598] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:06.922 [2024-10-06 11:28:04.475633] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:06.922 [2024-10-06 11:28:04.475650] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:06.922 [2024-10-06 11:28:04.475662] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:06.922 [2024-10-06 11:28:04.475674] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:06.922 [2024-10-06 11:28:04.483737] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x21f1f40 was disconnected and freed. delete nvme_qpair. 
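The fresh discovery attach above (new subsystem nvme1) follows the test restoring connectivity; the script then waits for nvme1n1 the same way it previously waited for nvme0n1. The restore commands, copied from the trace at discovery_remove_ifc.sh lines 82-83:

    # executed earlier in the trace, mirroring the del/down pair that broke the path
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up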
00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2232709 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2232709 ']' 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2232709 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2232709 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2232709' 00:33:07.182 killing process with pid 2232709 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2232709 00:33:07.182 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2232709 00:33:07.441 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:07.441 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:07.441 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:07.441 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:07.441 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:07.441 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:07.441 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:07.441 rmmod nvme_tcp 00:33:07.441 rmmod nvme_fabrics 00:33:07.441 rmmod nvme_keyring 00:33:07.441 11:28:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:07.441 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:07.441 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:07.441 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 2232688 ']' 00:33:07.441 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 2232688 00:33:07.441 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2232688 ']' 00:33:07.441 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2232688 00:33:07.442 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:07.442 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:07.442 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2232688 00:33:07.442 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:07.442 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:07.442 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2232688' 00:33:07.442 killing process with pid 2232688 00:33:07.442 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2232688 00:33:07.442 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2232688 00:33:07.701 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:07.701 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:07.701 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:07.701 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:07.701 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:33:07.701 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:07.701 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:33:07.701 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:07.701 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:07.701 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.701 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.701 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:10.237 00:33:10.237 real 0m20.033s 00:33:10.237 user 0m24.848s 00:33:10.237 sys 0m5.379s 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:10.237 ************************************ 00:33:10.237 END TEST nvmf_discovery_remove_ifc 00:33:10.237 ************************************ 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.237 ************************************ 00:33:10.237 START TEST nvmf_identify_kernel_target 00:33:10.237 ************************************ 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:10.237 * Looking for test storage... 00:33:10.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:10.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.237 --rc genhtml_branch_coverage=1 00:33:10.237 --rc genhtml_function_coverage=1 00:33:10.237 --rc genhtml_legend=1 00:33:10.237 --rc geninfo_all_blocks=1 00:33:10.237 --rc geninfo_unexecuted_blocks=1 00:33:10.237 00:33:10.237 ' 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:10.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.237 --rc genhtml_branch_coverage=1 00:33:10.237 --rc genhtml_function_coverage=1 00:33:10.237 --rc genhtml_legend=1 00:33:10.237 --rc geninfo_all_blocks=1 00:33:10.237 --rc geninfo_unexecuted_blocks=1 00:33:10.237 00:33:10.237 ' 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:10.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.237 --rc genhtml_branch_coverage=1 00:33:10.237 --rc genhtml_function_coverage=1 00:33:10.237 --rc genhtml_legend=1 00:33:10.237 --rc geninfo_all_blocks=1 00:33:10.237 --rc geninfo_unexecuted_blocks=1 00:33:10.237 00:33:10.237 ' 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:10.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.237 --rc genhtml_branch_coverage=1 00:33:10.237 --rc genhtml_function_coverage=1 00:33:10.237 --rc genhtml_legend=1 00:33:10.237 --rc geninfo_all_blocks=1 00:33:10.237 --rc geninfo_unexecuted_blocks=1 00:33:10.237 00:33:10.237 ' 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:10.237 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:33:10.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:10.238 11:28:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:15.513 11:28:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:15.513 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:15.513 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:15.513 Found net devices under 0000:af:00.0: cvl_0_0 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:15.513 Found net devices under 0000:af:00.1: cvl_0_1 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:15.513 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:15.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:15.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:33:15.514 00:33:15.514 --- 10.0.0.2 ping statistics --- 00:33:15.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:15.514 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:15.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:15.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:33:15.514 00:33:15.514 --- 10.0.0.1 ping statistics --- 00:33:15.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:15.514 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:15.514 11:28:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:15.514 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:18.050 Waiting for block devices as requested 00:33:18.050 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:18.050 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:18.050 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:18.050 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:18.050 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:18.309 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:18.309 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:18.309 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:18.309 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:18.568 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:18.568 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:18.568 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:18.828 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:18.828 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:18.828 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:18.828 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:19.087 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
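Before the mkdir/echo sequence traced next, it may help to see the whole configure_kernel_target flow in one place. Below is a condensed, non-authoritative sketch of those steps as a standalone script; the configfs attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are assumed from the standard Linux nvmet configfs layout, since the xtrace output only shows the echoed values, not the files they are redirected into.

#!/usr/bin/env bash
# Condensed sketch of the configure_kernel_target steps traced in this log.
# Assumptions: /dev/nvme0n1 is a free, non-zoned, unpartitioned namespace, and
# the configfs attribute names follow the standard Linux nvmet layout.
set -euo pipefail

kernel_name=nqn.2016-06.io.spdk:testnqn
kernel_target_ip=10.0.0.1
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/$kernel_name
ns=$subsys/namespaces/1
port=$nvmet/ports/1

modprobe nvmet                       # nvmet_tcp is loaded as well before traffic flows (see modprobe -r in cleanup)

mkdir "$subsys"                      # subsystem
mkdir "$ns"                          # namespace 1
mkdir "$port"                        # port 1

echo "SPDK-$kernel_name"  > "$subsys/attr_model"          # model string reported by Identify
echo 1                    > "$subsys/attr_allow_any_host" # no host NQN allow-list
echo /dev/nvme0n1         > "$ns/device_path"             # back the namespace with the local NVMe disk
echo 1                    > "$ns/enable"

echo "$kernel_target_ip"  > "$port/addr_traddr"           # listen on 10.0.0.1:4420 over TCP/IPv4
echo tcp                  > "$port/addr_trtype"
echo 4420                 > "$port/addr_trsvcid"
echo ipv4                 > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"                       # expose the subsystem on the port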
00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:19.087 No valid GPT data, bailing 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:33:19.087 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:33:19.088 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:19.088 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:33:19.348 00:33:19.348 Discovery Log Number of Records 2, Generation counter 2 00:33:19.348 =====Discovery Log Entry 0====== 00:33:19.348 trtype: tcp 00:33:19.348 adrfam: ipv4 00:33:19.348 subtype: current discovery subsystem 00:33:19.348 treq: not specified, sq flow control disable supported 00:33:19.348 portid: 1 00:33:19.348 trsvcid: 4420 00:33:19.348 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:19.348 traddr: 10.0.0.1 00:33:19.348 eflags: none 00:33:19.348 sectype: none 00:33:19.348 =====Discovery Log Entry 1====== 00:33:19.348 trtype: tcp 00:33:19.348 adrfam: ipv4 00:33:19.348 subtype: nvme subsystem 00:33:19.348 treq: not specified, sq flow control disable 
supported 00:33:19.348 portid: 1 00:33:19.348 trsvcid: 4420 00:33:19.348 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:19.348 traddr: 10.0.0.1 00:33:19.348 eflags: none 00:33:19.348 sectype: none 00:33:19.348 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:19.348 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:19.348 ===================================================== 00:33:19.348 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:19.348 ===================================================== 00:33:19.348 Controller Capabilities/Features 00:33:19.348 ================================ 00:33:19.348 Vendor ID: 0000 00:33:19.348 Subsystem Vendor ID: 0000 00:33:19.348 Serial Number: 9fefa23700baf97e08e4 00:33:19.348 Model Number: Linux 00:33:19.348 Firmware Version: 6.8.9-20 00:33:19.348 Recommended Arb Burst: 0 00:33:19.348 IEEE OUI Identifier: 00 00 00 00:33:19.348 Multi-path I/O 00:33:19.348 May have multiple subsystem ports: No 00:33:19.348 May have multiple controllers: No 00:33:19.348 Associated with SR-IOV VF: No 00:33:19.348 Max Data Transfer Size: Unlimited 00:33:19.348 Max Number of Namespaces: 0 00:33:19.348 Max Number of I/O Queues: 1024 00:33:19.348 NVMe Specification Version (VS): 1.3 00:33:19.348 NVMe Specification Version (Identify): 1.3 00:33:19.348 Maximum Queue Entries: 1024 00:33:19.348 Contiguous Queues Required: No 00:33:19.348 Arbitration Mechanisms Supported 00:33:19.348 Weighted Round Robin: Not Supported 00:33:19.348 Vendor Specific: Not Supported 00:33:19.348 Reset Timeout: 7500 ms 00:33:19.348 Doorbell Stride: 4 bytes 00:33:19.348 NVM Subsystem Reset: Not Supported 00:33:19.348 Command Sets Supported 00:33:19.348 NVM Command Set: Supported 00:33:19.348 Boot Partition: Not Supported 00:33:19.348 Memory Page Size Minimum: 4096 bytes 00:33:19.348 Memory Page Size Maximum: 4096 bytes 00:33:19.348 Persistent Memory Region: Not Supported 00:33:19.348 Optional Asynchronous Events Supported 00:33:19.348 Namespace Attribute Notices: Not Supported 00:33:19.348 Firmware Activation Notices: Not Supported 00:33:19.348 ANA Change Notices: Not Supported 00:33:19.348 PLE Aggregate Log Change Notices: Not Supported 00:33:19.348 LBA Status Info Alert Notices: Not Supported 00:33:19.348 EGE Aggregate Log Change Notices: Not Supported 00:33:19.348 Normal NVM Subsystem Shutdown event: Not Supported 00:33:19.348 Zone Descriptor Change Notices: Not Supported 00:33:19.348 Discovery Log Change Notices: Supported 00:33:19.348 Controller Attributes 00:33:19.348 128-bit Host Identifier: Not Supported 00:33:19.348 Non-Operational Permissive Mode: Not Supported 00:33:19.349 NVM Sets: Not Supported 00:33:19.349 Read Recovery Levels: Not Supported 00:33:19.349 Endurance Groups: Not Supported 00:33:19.349 Predictable Latency Mode: Not Supported 00:33:19.349 Traffic Based Keep ALive: Not Supported 00:33:19.349 Namespace Granularity: Not Supported 00:33:19.349 SQ Associations: Not Supported 00:33:19.349 UUID List: Not Supported 00:33:19.349 Multi-Domain Subsystem: Not Supported 00:33:19.349 Fixed Capacity Management: Not Supported 00:33:19.349 Variable Capacity Management: Not Supported 00:33:19.349 Delete Endurance Group: Not Supported 00:33:19.349 Delete NVM Set: Not Supported 00:33:19.349 Extended LBA Formats Supported: Not Supported 00:33:19.349 Flexible Data Placement 
Supported: Not Supported 00:33:19.349 00:33:19.349 Controller Memory Buffer Support 00:33:19.349 ================================ 00:33:19.349 Supported: No 00:33:19.349 00:33:19.349 Persistent Memory Region Support 00:33:19.349 ================================ 00:33:19.349 Supported: No 00:33:19.349 00:33:19.349 Admin Command Set Attributes 00:33:19.349 ============================ 00:33:19.349 Security Send/Receive: Not Supported 00:33:19.349 Format NVM: Not Supported 00:33:19.349 Firmware Activate/Download: Not Supported 00:33:19.349 Namespace Management: Not Supported 00:33:19.349 Device Self-Test: Not Supported 00:33:19.349 Directives: Not Supported 00:33:19.349 NVMe-MI: Not Supported 00:33:19.349 Virtualization Management: Not Supported 00:33:19.349 Doorbell Buffer Config: Not Supported 00:33:19.349 Get LBA Status Capability: Not Supported 00:33:19.349 Command & Feature Lockdown Capability: Not Supported 00:33:19.349 Abort Command Limit: 1 00:33:19.349 Async Event Request Limit: 1 00:33:19.349 Number of Firmware Slots: N/A 00:33:19.349 Firmware Slot 1 Read-Only: N/A 00:33:19.349 Firmware Activation Without Reset: N/A 00:33:19.349 Multiple Update Detection Support: N/A 00:33:19.349 Firmware Update Granularity: No Information Provided 00:33:19.349 Per-Namespace SMART Log: No 00:33:19.349 Asymmetric Namespace Access Log Page: Not Supported 00:33:19.349 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:19.349 Command Effects Log Page: Not Supported 00:33:19.349 Get Log Page Extended Data: Supported 00:33:19.349 Telemetry Log Pages: Not Supported 00:33:19.349 Persistent Event Log Pages: Not Supported 00:33:19.349 Supported Log Pages Log Page: May Support 00:33:19.349 Commands Supported & Effects Log Page: Not Supported 00:33:19.349 Feature Identifiers & Effects Log Page:May Support 00:33:19.349 NVMe-MI Commands & Effects Log Page: May Support 00:33:19.349 Data Area 4 for Telemetry Log: Not Supported 00:33:19.349 Error Log Page Entries Supported: 1 00:33:19.349 Keep Alive: Not Supported 00:33:19.349 00:33:19.349 NVM Command Set Attributes 00:33:19.349 ========================== 00:33:19.349 Submission Queue Entry Size 00:33:19.349 Max: 1 00:33:19.349 Min: 1 00:33:19.349 Completion Queue Entry Size 00:33:19.349 Max: 1 00:33:19.349 Min: 1 00:33:19.349 Number of Namespaces: 0 00:33:19.349 Compare Command: Not Supported 00:33:19.349 Write Uncorrectable Command: Not Supported 00:33:19.349 Dataset Management Command: Not Supported 00:33:19.349 Write Zeroes Command: Not Supported 00:33:19.349 Set Features Save Field: Not Supported 00:33:19.349 Reservations: Not Supported 00:33:19.349 Timestamp: Not Supported 00:33:19.349 Copy: Not Supported 00:33:19.349 Volatile Write Cache: Not Present 00:33:19.349 Atomic Write Unit (Normal): 1 00:33:19.349 Atomic Write Unit (PFail): 1 00:33:19.349 Atomic Compare & Write Unit: 1 00:33:19.349 Fused Compare & Write: Not Supported 00:33:19.349 Scatter-Gather List 00:33:19.349 SGL Command Set: Supported 00:33:19.349 SGL Keyed: Not Supported 00:33:19.349 SGL Bit Bucket Descriptor: Not Supported 00:33:19.349 SGL Metadata Pointer: Not Supported 00:33:19.349 Oversized SGL: Not Supported 00:33:19.349 SGL Metadata Address: Not Supported 00:33:19.349 SGL Offset: Supported 00:33:19.349 Transport SGL Data Block: Not Supported 00:33:19.349 Replay Protected Memory Block: Not Supported 00:33:19.349 00:33:19.349 Firmware Slot Information 00:33:19.349 ========================= 00:33:19.349 Active slot: 0 00:33:19.349 00:33:19.349 00:33:19.349 Error Log 00:33:19.349 
========= 00:33:19.349 00:33:19.349 Active Namespaces 00:33:19.349 ================= 00:33:19.349 Discovery Log Page 00:33:19.349 ================== 00:33:19.349 Generation Counter: 2 00:33:19.349 Number of Records: 2 00:33:19.349 Record Format: 0 00:33:19.349 00:33:19.349 Discovery Log Entry 0 00:33:19.349 ---------------------- 00:33:19.349 Transport Type: 3 (TCP) 00:33:19.349 Address Family: 1 (IPv4) 00:33:19.349 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:19.349 Entry Flags: 00:33:19.349 Duplicate Returned Information: 0 00:33:19.349 Explicit Persistent Connection Support for Discovery: 0 00:33:19.349 Transport Requirements: 00:33:19.349 Secure Channel: Not Specified 00:33:19.349 Port ID: 1 (0x0001) 00:33:19.349 Controller ID: 65535 (0xffff) 00:33:19.349 Admin Max SQ Size: 32 00:33:19.349 Transport Service Identifier: 4420 00:33:19.349 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:19.349 Transport Address: 10.0.0.1 00:33:19.349 Discovery Log Entry 1 00:33:19.349 ---------------------- 00:33:19.349 Transport Type: 3 (TCP) 00:33:19.349 Address Family: 1 (IPv4) 00:33:19.349 Subsystem Type: 2 (NVM Subsystem) 00:33:19.349 Entry Flags: 00:33:19.349 Duplicate Returned Information: 0 00:33:19.349 Explicit Persistent Connection Support for Discovery: 0 00:33:19.349 Transport Requirements: 00:33:19.349 Secure Channel: Not Specified 00:33:19.349 Port ID: 1 (0x0001) 00:33:19.349 Controller ID: 65535 (0xffff) 00:33:19.349 Admin Max SQ Size: 32 00:33:19.349 Transport Service Identifier: 4420 00:33:19.349 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:19.349 Transport Address: 10.0.0.1 00:33:19.349 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:19.349 get_feature(0x01) failed 00:33:19.349 get_feature(0x02) failed 00:33:19.349 get_feature(0x04) failed 00:33:19.349 ===================================================== 00:33:19.349 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:19.349 ===================================================== 00:33:19.349 Controller Capabilities/Features 00:33:19.349 ================================ 00:33:19.349 Vendor ID: 0000 00:33:19.349 Subsystem Vendor ID: 0000 00:33:19.349 Serial Number: 84457541980af0040e90 00:33:19.349 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:19.349 Firmware Version: 6.8.9-20 00:33:19.349 Recommended Arb Burst: 6 00:33:19.349 IEEE OUI Identifier: 00 00 00 00:33:19.349 Multi-path I/O 00:33:19.349 May have multiple subsystem ports: Yes 00:33:19.349 May have multiple controllers: Yes 00:33:19.349 Associated with SR-IOV VF: No 00:33:19.349 Max Data Transfer Size: Unlimited 00:33:19.349 Max Number of Namespaces: 1024 00:33:19.349 Max Number of I/O Queues: 128 00:33:19.349 NVMe Specification Version (VS): 1.3 00:33:19.349 NVMe Specification Version (Identify): 1.3 00:33:19.349 Maximum Queue Entries: 1024 00:33:19.349 Contiguous Queues Required: No 00:33:19.349 Arbitration Mechanisms Supported 00:33:19.349 Weighted Round Robin: Not Supported 00:33:19.349 Vendor Specific: Not Supported 00:33:19.349 Reset Timeout: 7500 ms 00:33:19.349 Doorbell Stride: 4 bytes 00:33:19.349 NVM Subsystem Reset: Not Supported 00:33:19.349 Command Sets Supported 00:33:19.349 NVM Command Set: Supported 00:33:19.349 Boot Partition: Not Supported 00:33:19.349 
Memory Page Size Minimum: 4096 bytes 00:33:19.349 Memory Page Size Maximum: 4096 bytes 00:33:19.349 Persistent Memory Region: Not Supported 00:33:19.349 Optional Asynchronous Events Supported 00:33:19.349 Namespace Attribute Notices: Supported 00:33:19.349 Firmware Activation Notices: Not Supported 00:33:19.349 ANA Change Notices: Supported 00:33:19.349 PLE Aggregate Log Change Notices: Not Supported 00:33:19.349 LBA Status Info Alert Notices: Not Supported 00:33:19.349 EGE Aggregate Log Change Notices: Not Supported 00:33:19.349 Normal NVM Subsystem Shutdown event: Not Supported 00:33:19.349 Zone Descriptor Change Notices: Not Supported 00:33:19.349 Discovery Log Change Notices: Not Supported 00:33:19.349 Controller Attributes 00:33:19.349 128-bit Host Identifier: Supported 00:33:19.349 Non-Operational Permissive Mode: Not Supported 00:33:19.349 NVM Sets: Not Supported 00:33:19.349 Read Recovery Levels: Not Supported 00:33:19.349 Endurance Groups: Not Supported 00:33:19.349 Predictable Latency Mode: Not Supported 00:33:19.350 Traffic Based Keep ALive: Supported 00:33:19.350 Namespace Granularity: Not Supported 00:33:19.350 SQ Associations: Not Supported 00:33:19.350 UUID List: Not Supported 00:33:19.350 Multi-Domain Subsystem: Not Supported 00:33:19.350 Fixed Capacity Management: Not Supported 00:33:19.350 Variable Capacity Management: Not Supported 00:33:19.350 Delete Endurance Group: Not Supported 00:33:19.350 Delete NVM Set: Not Supported 00:33:19.350 Extended LBA Formats Supported: Not Supported 00:33:19.350 Flexible Data Placement Supported: Not Supported 00:33:19.350 00:33:19.350 Controller Memory Buffer Support 00:33:19.350 ================================ 00:33:19.350 Supported: No 00:33:19.350 00:33:19.350 Persistent Memory Region Support 00:33:19.350 ================================ 00:33:19.350 Supported: No 00:33:19.350 00:33:19.350 Admin Command Set Attributes 00:33:19.350 ============================ 00:33:19.350 Security Send/Receive: Not Supported 00:33:19.350 Format NVM: Not Supported 00:33:19.350 Firmware Activate/Download: Not Supported 00:33:19.350 Namespace Management: Not Supported 00:33:19.350 Device Self-Test: Not Supported 00:33:19.350 Directives: Not Supported 00:33:19.350 NVMe-MI: Not Supported 00:33:19.350 Virtualization Management: Not Supported 00:33:19.350 Doorbell Buffer Config: Not Supported 00:33:19.350 Get LBA Status Capability: Not Supported 00:33:19.350 Command & Feature Lockdown Capability: Not Supported 00:33:19.350 Abort Command Limit: 4 00:33:19.350 Async Event Request Limit: 4 00:33:19.350 Number of Firmware Slots: N/A 00:33:19.350 Firmware Slot 1 Read-Only: N/A 00:33:19.350 Firmware Activation Without Reset: N/A 00:33:19.350 Multiple Update Detection Support: N/A 00:33:19.350 Firmware Update Granularity: No Information Provided 00:33:19.350 Per-Namespace SMART Log: Yes 00:33:19.350 Asymmetric Namespace Access Log Page: Supported 00:33:19.350 ANA Transition Time : 10 sec 00:33:19.350 00:33:19.350 Asymmetric Namespace Access Capabilities 00:33:19.350 ANA Optimized State : Supported 00:33:19.350 ANA Non-Optimized State : Supported 00:33:19.350 ANA Inaccessible State : Supported 00:33:19.350 ANA Persistent Loss State : Supported 00:33:19.350 ANA Change State : Supported 00:33:19.350 ANAGRPID is not changed : No 00:33:19.350 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:19.350 00:33:19.350 ANA Group Identifier Maximum : 128 00:33:19.350 Number of ANA Group Identifiers : 128 00:33:19.350 Max Number of Allowed Namespaces : 1024 00:33:19.350 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:19.350 Command Effects Log Page: Supported 00:33:19.350 Get Log Page Extended Data: Supported 00:33:19.350 Telemetry Log Pages: Not Supported 00:33:19.350 Persistent Event Log Pages: Not Supported 00:33:19.350 Supported Log Pages Log Page: May Support 00:33:19.350 Commands Supported & Effects Log Page: Not Supported 00:33:19.350 Feature Identifiers & Effects Log Page:May Support 00:33:19.350 NVMe-MI Commands & Effects Log Page: May Support 00:33:19.350 Data Area 4 for Telemetry Log: Not Supported 00:33:19.350 Error Log Page Entries Supported: 128 00:33:19.350 Keep Alive: Supported 00:33:19.350 Keep Alive Granularity: 1000 ms 00:33:19.350 00:33:19.350 NVM Command Set Attributes 00:33:19.350 ========================== 00:33:19.350 Submission Queue Entry Size 00:33:19.350 Max: 64 00:33:19.350 Min: 64 00:33:19.350 Completion Queue Entry Size 00:33:19.350 Max: 16 00:33:19.350 Min: 16 00:33:19.350 Number of Namespaces: 1024 00:33:19.350 Compare Command: Not Supported 00:33:19.350 Write Uncorrectable Command: Not Supported 00:33:19.350 Dataset Management Command: Supported 00:33:19.350 Write Zeroes Command: Supported 00:33:19.350 Set Features Save Field: Not Supported 00:33:19.350 Reservations: Not Supported 00:33:19.350 Timestamp: Not Supported 00:33:19.350 Copy: Not Supported 00:33:19.350 Volatile Write Cache: Present 00:33:19.350 Atomic Write Unit (Normal): 1 00:33:19.350 Atomic Write Unit (PFail): 1 00:33:19.350 Atomic Compare & Write Unit: 1 00:33:19.350 Fused Compare & Write: Not Supported 00:33:19.350 Scatter-Gather List 00:33:19.350 SGL Command Set: Supported 00:33:19.350 SGL Keyed: Not Supported 00:33:19.350 SGL Bit Bucket Descriptor: Not Supported 00:33:19.350 SGL Metadata Pointer: Not Supported 00:33:19.350 Oversized SGL: Not Supported 00:33:19.350 SGL Metadata Address: Not Supported 00:33:19.350 SGL Offset: Supported 00:33:19.350 Transport SGL Data Block: Not Supported 00:33:19.350 Replay Protected Memory Block: Not Supported 00:33:19.350 00:33:19.350 Firmware Slot Information 00:33:19.350 ========================= 00:33:19.350 Active slot: 0 00:33:19.350 00:33:19.350 Asymmetric Namespace Access 00:33:19.350 =========================== 00:33:19.350 Change Count : 0 00:33:19.350 Number of ANA Group Descriptors : 1 00:33:19.350 ANA Group Descriptor : 0 00:33:19.350 ANA Group ID : 1 00:33:19.350 Number of NSID Values : 1 00:33:19.350 Change Count : 0 00:33:19.350 ANA State : 1 00:33:19.350 Namespace Identifier : 1 00:33:19.350 00:33:19.350 Commands Supported and Effects 00:33:19.350 ============================== 00:33:19.350 Admin Commands 00:33:19.350 -------------- 00:33:19.350 Get Log Page (02h): Supported 00:33:19.350 Identify (06h): Supported 00:33:19.350 Abort (08h): Supported 00:33:19.350 Set Features (09h): Supported 00:33:19.350 Get Features (0Ah): Supported 00:33:19.350 Asynchronous Event Request (0Ch): Supported 00:33:19.350 Keep Alive (18h): Supported 00:33:19.350 I/O Commands 00:33:19.350 ------------ 00:33:19.350 Flush (00h): Supported 00:33:19.350 Write (01h): Supported LBA-Change 00:33:19.350 Read (02h): Supported 00:33:19.350 Write Zeroes (08h): Supported LBA-Change 00:33:19.350 Dataset Management (09h): Supported 00:33:19.350 00:33:19.350 Error Log 00:33:19.350 ========= 00:33:19.350 Entry: 0 00:33:19.350 Error Count: 0x3 00:33:19.350 Submission Queue Id: 0x0 00:33:19.350 Command Id: 0x5 00:33:19.350 Phase Bit: 0 00:33:19.350 Status Code: 0x2 00:33:19.350 Status Code Type: 0x0 00:33:19.350 Do Not Retry: 1 00:33:19.350 
Error Location: 0x28 00:33:19.350 LBA: 0x0 00:33:19.350 Namespace: 0x0 00:33:19.350 Vendor Log Page: 0x0 00:33:19.350 ----------- 00:33:19.350 Entry: 1 00:33:19.350 Error Count: 0x2 00:33:19.350 Submission Queue Id: 0x0 00:33:19.350 Command Id: 0x5 00:33:19.350 Phase Bit: 0 00:33:19.350 Status Code: 0x2 00:33:19.350 Status Code Type: 0x0 00:33:19.350 Do Not Retry: 1 00:33:19.350 Error Location: 0x28 00:33:19.350 LBA: 0x0 00:33:19.350 Namespace: 0x0 00:33:19.350 Vendor Log Page: 0x0 00:33:19.350 ----------- 00:33:19.350 Entry: 2 00:33:19.350 Error Count: 0x1 00:33:19.350 Submission Queue Id: 0x0 00:33:19.350 Command Id: 0x4 00:33:19.350 Phase Bit: 0 00:33:19.350 Status Code: 0x2 00:33:19.350 Status Code Type: 0x0 00:33:19.350 Do Not Retry: 1 00:33:19.350 Error Location: 0x28 00:33:19.350 LBA: 0x0 00:33:19.350 Namespace: 0x0 00:33:19.350 Vendor Log Page: 0x0 00:33:19.350 00:33:19.350 Number of Queues 00:33:19.350 ================ 00:33:19.350 Number of I/O Submission Queues: 128 00:33:19.350 Number of I/O Completion Queues: 128 00:33:19.350 00:33:19.350 ZNS Specific Controller Data 00:33:19.350 ============================ 00:33:19.350 Zone Append Size Limit: 0 00:33:19.350 00:33:19.350 00:33:19.350 Active Namespaces 00:33:19.350 ================= 00:33:19.350 get_feature(0x05) failed 00:33:19.350 Namespace ID:1 00:33:19.350 Command Set Identifier: NVM (00h) 00:33:19.350 Deallocate: Supported 00:33:19.350 Deallocated/Unwritten Error: Not Supported 00:33:19.350 Deallocated Read Value: Unknown 00:33:19.350 Deallocate in Write Zeroes: Not Supported 00:33:19.350 Deallocated Guard Field: 0xFFFF 00:33:19.350 Flush: Supported 00:33:19.350 Reservation: Not Supported 00:33:19.350 Namespace Sharing Capabilities: Multiple Controllers 00:33:19.350 Size (in LBAs): 1953525168 (931GiB) 00:33:19.350 Capacity (in LBAs): 1953525168 (931GiB) 00:33:19.350 Utilization (in LBAs): 1953525168 (931GiB) 00:33:19.350 UUID: c702c5c9-63a7-41fb-992e-c3e69755263c 00:33:19.350 Thin Provisioning: Not Supported 00:33:19.350 Per-NS Atomic Units: Yes 00:33:19.350 Atomic Boundary Size (Normal): 0 00:33:19.350 Atomic Boundary Size (PFail): 0 00:33:19.350 Atomic Boundary Offset: 0 00:33:19.350 NGUID/EUI64 Never Reused: No 00:33:19.350 ANA group ID: 1 00:33:19.350 Namespace Write Protected: No 00:33:19.350 Number of LBA Formats: 1 00:33:19.350 Current LBA Format: LBA Format #00 00:33:19.351 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:19.351 00:33:19.351 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:19.351 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:19.351 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:19.351 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:19.351 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:19.351 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:19.351 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:19.351 rmmod nvme_tcp 00:33:19.351 rmmod nvme_fabrics 00:33:19.351 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:19.351 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:19.351 11:28:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:19.351 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:33:19.351 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:19.351 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:19.351 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:19.351 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:19.351 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:33:19.351 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:19.351 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:33:19.610 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:19.610 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:19.610 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.610 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:19.610 11:28:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.517 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:21.517 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:21.517 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:21.517 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:33:21.517 11:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:21.517 11:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:21.517 11:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:21.517 11:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:21.517 11:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:33:21.517 11:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:33:21.517 11:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:24.052 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:24.052 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:24.052 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:24.052 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:24.052 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:24.052 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:33:24.052 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:24.052 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:24.052 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:24.052 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:24.052 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:24.052 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:24.052 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:24.052 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:24.052 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:24.052 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:24.986 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:33:24.986 00:33:24.986 real 0m15.179s 00:33:24.986 user 0m3.808s 00:33:24.986 sys 0m7.718s 00:33:24.986 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:24.986 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:24.986 ************************************ 00:33:24.986 END TEST nvmf_identify_kernel_target 00:33:24.986 ************************************ 00:33:24.986 11:28:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:24.986 11:28:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:24.986 11:28:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:24.986 11:28:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.986 ************************************ 00:33:24.986 START TEST nvmf_auth_host 00:33:24.986 ************************************ 00:33:24.986 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:25.246 * Looking for test storage... 
00:33:25.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:25.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.246 --rc genhtml_branch_coverage=1 00:33:25.246 --rc genhtml_function_coverage=1 00:33:25.246 --rc genhtml_legend=1 00:33:25.246 --rc geninfo_all_blocks=1 00:33:25.246 --rc geninfo_unexecuted_blocks=1 00:33:25.246 00:33:25.246 ' 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:25.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.246 --rc genhtml_branch_coverage=1 00:33:25.246 --rc genhtml_function_coverage=1 00:33:25.246 --rc genhtml_legend=1 00:33:25.246 --rc geninfo_all_blocks=1 00:33:25.246 --rc geninfo_unexecuted_blocks=1 00:33:25.246 00:33:25.246 ' 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:25.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.246 --rc genhtml_branch_coverage=1 00:33:25.246 --rc genhtml_function_coverage=1 00:33:25.246 --rc genhtml_legend=1 00:33:25.246 --rc geninfo_all_blocks=1 00:33:25.246 --rc geninfo_unexecuted_blocks=1 00:33:25.246 00:33:25.246 ' 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:25.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.246 --rc genhtml_branch_coverage=1 00:33:25.246 --rc genhtml_function_coverage=1 00:33:25.246 --rc genhtml_legend=1 00:33:25.246 --rc geninfo_all_blocks=1 00:33:25.246 --rc geninfo_unexecuted_blocks=1 00:33:25.246 00:33:25.246 ' 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:25.246 11:28:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:25.246 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:25.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:25.247 11:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:30.523 11:28:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:30.523 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.523 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:30.524 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.524 
11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:30.524 Found net devices under 0000:af:00.0: cvl_0_0 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:30.524 Found net devices under 0000:af:00.1: cvl_0_1 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:30.524 11:28:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:30.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:30.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:33:30.524 00:33:30.524 --- 10.0.0.2 ping statistics --- 00:33:30.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.524 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:30.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:30.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:33:30.524 00:33:30.524 --- 10.0.0.1 ping statistics --- 00:33:30.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.524 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=2244026 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 2244026 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2244026 ']' 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
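The nvmf_tcp_init steps traced above move one of the two ice ports (cvl_0_0) into a private network namespace, leave its peer (cvl_0_1) in the default namespace, address both ends on 10.0.0.0/24, open TCP port 4420 in iptables, and verify reachability with ping before the SPDK application is started. Condensed into a hand-runnable sequence (interface names, addresses, and flags copied from the trace; run as root):

  # Isolate one port in its own namespace so both ends of the TCP link live on one box.
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # default-namespace side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # namespaced side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP listener port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1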
00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:30.524 11:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.524 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:30.524 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:33:30.524 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:30.524 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:30.524 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.524 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:30.524 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:30.524 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:30.524 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:30.524 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:30.524 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:30.524 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:33:30.524 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:30.524 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:30.524 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=7cd82e75362ee0c0f465ec0548996fab 00:33:30.524 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.IiM 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 7cd82e75362ee0c0f465ec0548996fab 0 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 7cd82e75362ee0c0f465ec0548996fab 0 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=7cd82e75362ee0c0f465ec0548996fab 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.IiM 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.IiM 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.IiM 00:33:30.785 11:28:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=fe3967e22c1c90a696114c0fb2171f690093377cf682ba891512b36520041c15 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.HFs 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key fe3967e22c1c90a696114c0fb2171f690093377cf682ba891512b36520041c15 3 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 fe3967e22c1c90a696114c0fb2171f690093377cf682ba891512b36520041c15 3 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=fe3967e22c1c90a696114c0fb2171f690093377cf682ba891512b36520041c15 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.HFs 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.HFs 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.HFs 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=1e42c9d3a99b48395c722ee5b9dacef5f1db2c54e5f416e2 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.nKh 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # 
format_dhchap_key 1e42c9d3a99b48395c722ee5b9dacef5f1db2c54e5f416e2 0 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 1e42c9d3a99b48395c722ee5b9dacef5f1db2c54e5f416e2 0 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=1e42c9d3a99b48395c722ee5b9dacef5f1db2c54e5f416e2 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.nKh 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.nKh 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.nKh 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=09bcd4cd05ac6044c0db426f08269439c80690816a55d468 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.gzZ 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 09bcd4cd05ac6044c0db426f08269439c80690816a55d468 2 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 09bcd4cd05ac6044c0db426f08269439c80690816a55d468 2 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=09bcd4cd05ac6044c0db426f08269439c80690816a55d468 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.gzZ 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.gzZ 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.gzZ 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@749 -- # local digest len file key 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:30.785 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=4decfb9b786b77bacc872b0522c7cdff 00:33:30.786 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:33:30.786 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.fWD 00:33:30.786 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 4decfb9b786b77bacc872b0522c7cdff 1 00:33:30.786 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 4decfb9b786b77bacc872b0522c7cdff 1 00:33:30.786 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:30.786 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:30.786 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=4decfb9b786b77bacc872b0522c7cdff 00:33:30.786 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:33:30.786 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.fWD 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.fWD 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.fWD 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=821e8af71cbc0fb343d79b79e0c85cd1 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.0av 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 821e8af71cbc0fb343d79b79e0c85cd1 1 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 821e8af71cbc0fb343d79b79e0c85cd1 1 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key 
digest 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=821e8af71cbc0fb343d79b79e0c85cd1 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.0av 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.0av 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.0av 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6aaf3272c5316f751686e2cec0e2108b93d7433188067fb9 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.FBY 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6aaf3272c5316f751686e2cec0e2108b93d7433188067fb9 2 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6aaf3272c5316f751686e2cec0e2108b93d7433188067fb9 2 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6aaf3272c5316f751686e2cec0e2108b93d7433188067fb9 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.FBY 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.FBY 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.FBY 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:31.044 11:28:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=65494e126a1969dc22f2f46aaa00d4f4 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.8Id 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 65494e126a1969dc22f2f46aaa00d4f4 0 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 65494e126a1969dc22f2f46aaa00d4f4 0 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=65494e126a1969dc22f2f46aaa00d4f4 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.8Id 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.8Id 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.8Id 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=7213f80dae49bcdefeaf51a17e1c3f0df3b3bb0b81e4c820c58446bcf7faa58a 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.dxu 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 7213f80dae49bcdefeaf51a17e1c3f0df3b3bb0b81e4c820c58446bcf7faa58a 3 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 7213f80dae49bcdefeaf51a17e1c3f0df3b3bb0b81e4c820c58446bcf7faa58a 3 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=7213f80dae49bcdefeaf51a17e1c3f0df3b3bb0b81e4c820c58446bcf7faa58a 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:33:31.044 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.dxu 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.dxu 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.dxu 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2244026 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2244026 ']' 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.IiM 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.HFs ]] 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.HFs 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.nKh 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.303 
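Each gen_dhchap_key call above reads len/2 random bytes with xxd, has the inline python wrap the hex string into a DHHC-1 secret (the representation nvme-cli and the kernel accept), and locks the resulting file down to mode 0600. A hypothetical stand-in for that helper, assuming (as the base64 payloads printed above suggest) that the encoded blob is the ASCII secret followed by its CRC-32 appended in little-endian order:

  # Sketch only; digest id 0 = null, 1 = sha256, 2 = sha384, 3 = sha512,
  # matching the 'digests' map in the trace. Byte order of the CRC is assumed.
  gen_dhchap_key_sketch() {
      local digest_id=$1 len=$2
      local key file
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
      file=$(mktemp -t spdk.key.XXX)
      python3 - "$key" "$digest_id" > "$file" <<'PY'
  import base64, binascii, struct, sys
  key, digest = sys.argv[1], sys.argv[2]
  payload = key.encode()
  crc = struct.pack("<I", binascii.crc32(payload))   # assumed little-endian
  print(f"DHHC-1:{int(digest):02}:{base64.b64encode(payload + crc).decode()}:")
  PY
      chmod 0600 "$file"
      echo "$file"
  }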
11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.gzZ ]] 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gzZ 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.fWD 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.0av ]] 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0av 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.FBY 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.8Id ]] 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.8Id 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.303 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.dxu 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 
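The generated files are then registered with the SPDK application's keyring through the keyring_file_add_key RPC (key0..key4 plus the ckeyN controller keys), so later bdev_nvme calls can refer to each secret by name rather than by path. Assuming rpc_cmd is the usual wrapper around scripts/rpc.py talking to the nvmf_tgt started earlier, the same registrations look roughly like:

  # Key names and temp-file paths are the ones printed earlier in this log.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC keyring_file_add_key key0  /tmp/spdk.key-null.IiM
  $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.HFs
  $RPC keyring_file_add_key key1  /tmp/spdk.key-null.nKh
  $RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gzZ
  # ...and so on through key4 / ckey3.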
00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:31.563 11:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:34.094 Waiting for block devices as requested 00:33:34.094 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:34.094 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:34.353 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:34.353 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:34.353 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:34.353 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:34.613 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:34.613 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:34.613 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:34.613 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:34.873 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:34.873 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:34.873 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:35.132 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:35.132 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:35.132 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:35.132 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:36.071 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:33:36.071 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:36.071 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:33:36.071 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:36.071 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:36.071 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:36.071 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:33:36.071 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:36.071 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:36.071 No valid GPT data, bailing 00:33:36.071 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:36.071 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:33:36.071 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:36.072 11:28:33 
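configure_kernel_target builds the counterpart NVMe-oF target out of the kernel nvmet configfs tree: a subsystem named nqn.2024-02.io.spdk:cnode0 backed by the local /dev/nvme0n1, one namespace, and a TCP port on 10.0.0.1:4420. The mkdirs above are then populated by the echo commands that follow in the trace; the redirect targets are not shown, so the file names below are spelled out with the standard nvmet configfs attributes (the serial/model attribute in particular is an assumption):

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe nvmet
  mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # attribute name assumed
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"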
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:33:36.072 00:33:36.072 Discovery Log Number of Records 2, Generation counter 2 00:33:36.072 =====Discovery Log Entry 0====== 00:33:36.072 trtype: tcp 00:33:36.072 adrfam: ipv4 00:33:36.072 subtype: current discovery subsystem 00:33:36.072 treq: not specified, sq flow control disable supported 00:33:36.072 portid: 1 00:33:36.072 trsvcid: 4420 00:33:36.072 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:36.072 traddr: 10.0.0.1 00:33:36.072 eflags: none 00:33:36.072 sectype: none 00:33:36.072 =====Discovery Log Entry 1====== 00:33:36.072 trtype: tcp 00:33:36.072 adrfam: ipv4 00:33:36.072 subtype: nvme subsystem 00:33:36.072 treq: not specified, sq flow control disable supported 00:33:36.072 portid: 1 00:33:36.072 trsvcid: 4420 00:33:36.072 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:36.072 traddr: 10.0.0.1 00:33:36.072 eflags: none 00:33:36.072 sectype: none 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: ]] 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.072 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.332 nvme0n1 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: ]] 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
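On the SPDK side, connect_authenticate first enables the digest and DH-group set under test with bdev_nvme_set_options and then attaches to the kernel target with the named key pair, which triggers the DH-HMAC-CHAP handshake as part of the connect; the controller is verified with bdev_nvme_get_controllers and torn down again. Reduced to plain rpc.py calls with the same flags as in the trace (assuming, as above, that rpc_cmd forwards to scripts/rpc.py):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_nvme_set_options \
      --dhchap-digests  sha256,sha384,sha512 \
      --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $RPC bdev_nvme_get_controllers          # expect a controller named nvme0
  $RPC bdev_nvme_detach_controller nvme0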
00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:36.332 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:36.333 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:36.333 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:36.333 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:36.333 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:36.333 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:36.333 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:36.333 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:36.333 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:36.333 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:36.333 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:36.333 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.333 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.333 nvme0n1 00:33:36.333 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.333 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:36.333 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:36.333 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.333 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.333 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:36.593 11:28:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: ]] 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.593 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.593 nvme0n1 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: ]] 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.593 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.853 nvme0n1 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: ]] 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:36.853 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:36.854 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:36.854 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:36.854 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:36.854 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:36.854 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:36.854 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:36.854 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:36.854 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.854 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.113 nvme0n1 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:37.113 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:37.114 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:37.114 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:37.114 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:37.114 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:37.114 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.114 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.374 nvme0n1 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.374 11:28:34 
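[annotation] For keyid 4 the trace shows "ckey=" left empty and "[[ -z '' ]]" evaluating true, so the attach above carries only --dhchap-key key4 and no controller key. That behaviour comes from the conditional expansion at host/auth.sh@58: ${ckeys[keyid]:+...} produces the extra flags only when a controller key exists. A small self-contained sketch of that expansion, with placeholder array contents (only the expansion behaviour is taken from the trace):

    #!/usr/bin/env bash
    # Hypothetical stand-ins for the test's ckeys array; keyid 4 has no controller key.
    declare -a ckeys=([0]="DHHC-1:placeholder" [4]="")

    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"     # prints 0: no extra flags are appended for keyid 4

    keyid=0
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"      # prints: --dhchap-ctrlr-key ckey0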
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: ]] 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.374 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.635 nvme0n1 00:33:37.635 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.635 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:37.635 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:37.635 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.635 11:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: ]] 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:37.635 
11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.635 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.894 nvme0n1 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: ]] 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:37.895 11:28:35 
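[annotation] The nvmet_auth_set_key calls in the trace (host/auth.sh@48 to @51) echo the hash name, the dhgroup, the host key and, when present, the controller key. xtrace does not print redirections, so the destination files are not visible here; assuming the standard kernel nvmet configfs layout for the target side, the writes most likely look like the following sketch (paths and host NQN are assumptions, not taken from this excerpt):

    hostnqn=nqn.2024-02.io.spdk:host0
    host_dir=/sys/kernel/config/nvmet/hosts/${hostnqn}

    echo 'hmac(sha256)' > "${host_dir}/dhchap_hash"      # digest under test
    echo ffdhe3072      > "${host_dir}/dhchap_dhgroup"   # dhgroup under test
    echo "${key}"       > "${host_dir}/dhchap_key"       # host secret (unidirectional auth)
    [[ -n "${ckey}" ]] && echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"   # bidirectional only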
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.895 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.155 nvme0n1 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: ]] 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.155 11:28:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.155 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.414 nvme0n1 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:38.414 11:28:35 
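[annotation] The get_main_ns_ip fragments repeated throughout this trace (nvmf/common.sh@767 to @781) resolve which address the host should dial: an associative array maps the transport to the *name* of the environment variable holding the address, and indirect expansion dereferences it (10.0.0.1 for tcp in this run). A reconstruction of that logic; the TEST_TRANSPORT variable name is an assumption, since xtrace only shows its expanded value "tcp":

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # Bail out if the transport or its candidate variable name is unknown.
        [[ -z ${TEST_TRANSPORT} || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1    # indirect expansion: the address itself
        echo "${!ip}"                  # e.g. 10.0.0.1 for tcp
    }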
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:38.414 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:38.415 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:38.415 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.415 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.415 nvme0n1 00:33:38.415 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.415 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:38.415 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.415 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.415 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.415 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: ]] 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.674 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.933 nvme0n1 00:33:38.933 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.933 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:38.933 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.933 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.933 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.933 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.933 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.933 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:38.933 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.933 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.933 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.933 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:38.933 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:33:38.933 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.933 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:38.934 11:28:36 
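[annotation] The DHHC-1:xx:...: strings carried through the trace follow the NVMe DH-HMAC-CHAP secret representation: the middle field (00-03) says how the secret is transformed before use (00 means it is used as-is, 01/02/03 indicate SHA-256/384/512), and the base64 payload is, per that representation, the secret bytes followed by a 4-byte CRC-32. A quick inspection of one of the keys from this log (the CRC/length breakdown is the documented layout, not something verified by this test run):

    key='DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==:'
    b64=$(cut -d: -f3 <<< "$key")          # third colon-separated field is the base64 payload
    echo -n "$b64" | base64 -d | wc -c     # 52 bytes here: 48-byte secret + 4-byte CRC-32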
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: ]] 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.934 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.195 nvme0n1 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: ]] 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
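[annotation] By this point the trace has moved on to ffdhe4096; the host/auth.sh@101 and @102 markers show the driver loop that produces these repeated passes: every dhgroup is exercised against every key slot, and each iteration first programs the target (nvmet_auth_set_key) and then attaches, verifies and detaches on the host (connect_authenticate). A sketch of that iteration; the array contents are stand-ins, only the shape of the loop and the function names are taken from the trace, and this excerpt only ever shows digest sha256:

    digest=sha256
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)
    keys=("$key0" "$key1" "$key2" "$key3" "$key4")   # hypothetical: populated earlier in the test

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side: attach, verify, detach
        done
    done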
00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.195 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.454 nvme0n1 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: ]] 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:39.454 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:39.455 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.455 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.455 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.455 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:39.455 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:39.455 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:39.455 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:39.455 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:39.455 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.455 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:39.455 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:39.455 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:39.455 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:39.455 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:39.455 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:39.455 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.455 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.715 nvme0n1 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:39.715 11:28:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.715 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.974 nvme0n1 00:33:39.974 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.974 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:39.974 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:39.974 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.974 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.974 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: ]] 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.234 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.494 nvme0n1 00:33:40.494 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.494 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.494 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.494 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.494 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:40.494 11:28:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: ]] 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 
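The matching target-side step (nvmet_auth_set_key) appears in this trace only as bare echo commands because xtrace does not print redirections. A plausible reconstruction, assuming the standard Linux nvmet host configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and an $nvmet_host directory prepared earlier in auth.sh; the attribute paths are assumptions, not shown in this log:

# Hypothetical reconstruction of the target-side key setup traced above;
# $nvmet_host and the configfs attribute names are assumed, not visible in the log.
nvmet_auth_set_key_sketch() {
  local digest=$1 dhgroup=$2 keyid=$3
  local key=${keys[keyid]} ckey=${ckeys[keyid]}
  echo "hmac($digest)" > "$nvmet_host/dhchap_hash"   # e.g. hmac(sha256)
  echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"     # e.g. ffdhe6144
  echo "$key" > "$nvmet_host/dhchap_key"
  # keyid 4 in this run has no controller key, so the bidirectional half is skipped.
  [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
}

With sha256 fixed as the digest, the outer loops visible in the trace (host/auth.sh@101 and @102) then repeat these two steps for each of ffdhe4096, ffdhe6144 and ffdhe8192 across key IDs 0 through 4.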
00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.494 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.062 nvme0n1 00:33:41.062 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.062 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:41.062 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:41.062 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.062 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.062 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.062 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:41.062 11:28:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:41.062 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.062 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: ]] 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.063 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.322 nvme0n1 00:33:41.322 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.322 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:41.322 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:41.322 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.322 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.322 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.322 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:41.322 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:41.322 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.322 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.582 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.582 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:41.582 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:33:41.582 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:41.582 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:41.582 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:41.582 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:41.582 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:41.582 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:41.582 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:41.582 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:41.582 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:41.582 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: ]] 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.583 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.842 nvme0n1 00:33:41.842 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.842 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:41.842 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.842 11:28:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.842 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:41.842 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.842 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:41.842 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:41.842 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.842 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.842 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.842 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:41.842 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:33:41.842 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:41.842 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:41.843 11:28:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.843 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.412 nvme0n1 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:42.412 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: ]] 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.413 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:42.983 nvme0n1 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: ]] 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.983 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.553 nvme0n1 00:33:43.553 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:43.553 
11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: ]] 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.553 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.122 nvme0n1 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: ]] 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.122 
11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.122 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.382 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.382 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.382 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:44.382 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:44.382 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:44.382 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.382 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.382 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:44.382 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:44.382 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:44.382 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:44.382 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:44.382 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:44.382 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.382 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.950 nvme0n1 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.950 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.519 nvme0n1 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: ]] 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.519 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.778 nvme0n1 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: ]] 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.779 nvme0n1 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.779 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:46.039 11:28:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: ]] 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.039 nvme0n1 00:33:46.039 11:28:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: ]] 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.039 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.299 nvme0n1 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.299 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.558 nvme0n1 00:33:46.558 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.558 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.558 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.558 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.558 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.558 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.558 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.558 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.558 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.558 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: ]] 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.559 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.818 nvme0n1 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.818 
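The iteration order seen here (sha256 with ffdhe8192 ending above, sha384 then stepping through ffdhe2048, ffdhe3072 and onward) comes from three nested loops in host/auth.sh; a condensed sketch, with array contents inferred only from the values that actually appear in this excerpt:

    # Loop nest driving markers like "host/auth.sh@100 -- # for digest ..." in this trace.
    # Only values visible in this excerpt are listed; the full run may cover more digests
    # and DH groups. nvmet_auth_set_key and connect_authenticate are defined in host/auth.sh
    # (target-side key setup and host-side attach/verify/detach, respectively).
    digests=(sha256 sha384)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)
    keys=(key0 key1 key2 key3 key4)   # placeholder entries; only the indices matter here

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done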
11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: ]] 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:46.818 11:28:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:46.818 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:46.819 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:46.819 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:46.819 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.819 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.078 nvme0n1 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: ]] 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.078 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.336 nvme0n1 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: ]] 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.336 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.595 nvme0n1 00:33:47.595 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.596 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.596 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.596 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.596 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.596 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:47.596 
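After every attach the script confirms that the controller actually authenticated and then detaches it; reduced to its essentials, the check repeated throughout this excerpt is:

    # Per-iteration verification and teardown, as exercised at host/auth.sh@64-65 above; the
    # bare "nvme0n1" lines in the log are the bdev name reported by each successful attach.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                       # controller came up, i.e. authentication passed
    rpc_cmd bdev_nvme_detach_controller nvme0    # clean up before the next digest/dhgroup/keyid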
11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.596 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.855 nvme0n1 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.855 
11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:47.855 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: ]] 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.856 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.115 nvme0n1 00:33:48.115 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.115 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:48.115 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:48.115 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.115 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.115 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.115 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:48.115 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:48.115 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.115 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.115 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.115 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:48.115 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:48.115 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:48.115 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:48.115 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:48.115 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:48.115 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: ]] 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:48.116 11:28:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.116 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.375 nvme0n1 00:33:48.375 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.375 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:48.375 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:48.375 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.375 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.375 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.375 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:48.375 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: ]] 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.376 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.635 nvme0n1 00:33:48.635 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.635 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:48.635 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:48.635 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.635 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.635 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.635 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:48.635 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:48.635 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.635 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.894 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.894 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:48.894 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:33:48.894 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:48.894 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:48.894 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:48.894 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:48.894 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:48.894 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:48.894 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:48.894 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:48.894 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:48.894 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: ]] 00:33:48.894 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:48.894 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:48.894 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.895 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.154 nvme0n1 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:49.154 11:28:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:49.154 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.155 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.415 nvme0n1 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: ]] 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.415 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.701 nvme0n1 00:33:49.701 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.701 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.701 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.701 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.701 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.701 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: ]] 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:50.014 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.015 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.293 nvme0n1 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.293 11:28:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: ]] 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:50.293 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.294 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.294 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.294 11:28:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.294 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:50.294 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:50.294 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:50.294 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.294 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.294 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:50.294 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.294 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:50.294 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:50.294 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:50.294 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:50.294 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.294 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.552 nvme0n1 00:33:50.552 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.552 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.552 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.552 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.552 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: ]] 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.811 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.812 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.812 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.812 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:50.812 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:50.812 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:50.812 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.812 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.812 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:50.812 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.812 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:50.812 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:50.812 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:50.812 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:50.812 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.812 
11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.070 nvme0n1 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.070 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.639 nvme0n1 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:51.639 11:28:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: ]] 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.639 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.640 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.640 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:51.640 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:51.640 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:51.640 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:51.640 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.640 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.640 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:51.640 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.640 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:51.640 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:51.640 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:51.640 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:51.640 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.640 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.209 nvme0n1 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: ]] 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.209 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.778 nvme0n1 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: ]] 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:52.778 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:52.779 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:52.779 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.779 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.038 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.038 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:53.038 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:53.038 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:53.038 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:53.038 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:53.038 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:53.038 
11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:53.038 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:53.038 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:53.038 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:53.038 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:53.038 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:53.038 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.038 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.607 nvme0n1 00:33:53.607 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.607 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:53.607 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:53.607 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.607 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.607 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.607 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.607 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:53.607 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.607 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.607 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.607 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:53.607 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:53.607 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:53.607 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:53.608 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:53.608 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:53.608 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:53.608 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:53.608 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:53.608 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:53.608 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:53.608 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: ]] 00:33:53.608 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:53.608 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:53.608 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:53.608 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:53.608 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:53.608 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:53.608 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:53.608 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:53.608 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.608 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.608 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.608 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:53.608 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:53.608 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:53.608 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:53.608 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:53.608 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:53.608 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:53.608 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:53.608 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:53.608 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:53.608 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:53.608 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:53.608 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.608 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.177 nvme0n1 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.177 11:28:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:54.177 11:28:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.177 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.746 nvme0n1 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: ]] 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:54.746 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.747 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:54.747 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.747 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.747 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.747 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.747 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:54.747 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:54.747 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:54.747 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.747 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.747 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:54.747 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.747 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:54.747 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:54.747 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:54.747 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:54.747 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.747 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:55.007 nvme0n1 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: ]] 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.007 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.267 nvme0n1 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:55.267 
11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: ]] 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.267 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.527 nvme0n1 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: ]] 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.527 
11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.527 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.527 nvme0n1 00:33:55.527 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.527 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.527 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.528 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.528 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.528 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.787 nvme0n1 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.787 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:55.788 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:55.788 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:55.788 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:55.788 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:55.788 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:55.788 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:55.788 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:55.788 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: ]] 00:33:55.788 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:55.788 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:55.788 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.788 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:55.788 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:55.788 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:55.788 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.788 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:55.788 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.788 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.048 nvme0n1 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.048 
11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: ]] 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:56.048 11:28:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:56.048 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.308 nvme0n1 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:56.308 11:28:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: ]] 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.308 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.568 nvme0n1 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: ]] 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.568 11:28:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.568 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.828 nvme0n1 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:56.828 
11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.828 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
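[editor's note] The xtrace above and below repeats one pattern: for the sha512 digest the test walks the dhgroups (ffdhe3072, then ffdhe4096, then ffdhe6144) and, for each keyid 0-4, restricts the host with bdev_nvme_set_options, attaches to 10.0.0.1:4420 with --dhchap-key key<N> (plus --dhchap-ctrlr-key ckey<N> when a controller key is configured), confirms via bdev_nvme_get_controllers that nvme0 appeared, and detaches it. The following is a minimal sketch of that per-iteration flow reconstructed from the trace, not a copy of host/auth.sh; it assumes rpc_cmd and the ckeys array behave as the trace suggests, and reuses only the RPC names, flags, NQNs, and endpoint echoed by the log.

  for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
    for keyid in "${!keys[@]}"; do
      # limit the initiator to one digest/dhgroup combination for this pass
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
      # connect with key<N>; the controller key is passed only when ckey<N> is set
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
      # authentication succeeded if the controller shows up under its expected name
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done

The "nvme0n1" lines interleaved with the trace are the namespace of the attached controller being reported between iterations; each detach removes it before the next key is tried.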
00:33:57.087 nvme0n1 00:33:57.087 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.087 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.087 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.087 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.087 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.087 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.087 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.087 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.087 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.087 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.087 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.087 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:57.087 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.087 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:33:57.087 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.087 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:57.087 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:57.087 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: ]] 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:57.088 11:28:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.088 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.347 nvme0n1 00:33:57.347 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.347 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.347 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.347 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.347 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.347 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.347 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.347 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.347 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.347 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.347 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.347 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.347 11:28:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:57.347 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: ]] 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.348 11:28:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.348 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.607 nvme0n1 00:33:57.607 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.607 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.607 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.607 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.607 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.607 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: ]] 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.866 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.126 nvme0n1 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: ]] 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.126 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.386 nvme0n1 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.386 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.645 nvme0n1 00:33:58.645 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.645 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.645 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.645 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.645 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.645 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.645 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.645 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.645 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.645 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.645 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.645 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:58.645 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.645 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:58.645 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.645 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:58.645 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:58.645 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: ]] 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.646 11:28:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.646 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.214 nvme0n1 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: ]] 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:59.214 11:28:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.214 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.473 nvme0n1 00:33:59.473 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.473 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.473 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.474 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.474 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.474 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.733 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.733 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.733 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.733 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.733 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.733 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.733 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:59.733 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.733 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:59.733 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:59.733 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:59.733 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: ]] 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.734 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.993 nvme0n1 00:33:59.993 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.993 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.993 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.993 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.993 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.993 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.993 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.993 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.993 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.993 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: ]] 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.994 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.563 nvme0n1 00:34:00.563 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.563 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.563 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:00.564 11:28:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.564 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.824 nvme0n1 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:00.824 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NkODJlNzUzNjJlZTBjMGY0NjVlYzA1NDg5OTZmYWJbiQ6P: 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: ]] 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmUzOTY3ZTIyYzFjOTBhNjk2MTE0YzBmYjIxNzFmNjkwMDkzMzc3Y2Y2ODJiYTg5MTUxMmIzNjUyMDA0MWMxNTjcXek=: 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.825 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.393 nvme0n1 00:34:01.393 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.393 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.393 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.393 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.393 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.393 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.652 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.652 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.652 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.652 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.652 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.652 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.652 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:01.652 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.652 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:01.652 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:01.652 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:01.652 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:34:01.652 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:34:01.652 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:01.652 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:01.652 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:34:01.652 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: ]] 00:34:01.652 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:34:01.653 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:01.653 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.653 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:01.653 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:01.653 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:01.653 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.653 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:01.653 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.653 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.653 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.653 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.653 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:01.653 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:01.653 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:01.653 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.653 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.653 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:01.653 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.653 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:01.653 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:01.653 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:01.653 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:01.653 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.653 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.221 nvme0n1 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.221 11:28:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: ]] 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.221 11:28:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.221 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.788 nvme0n1 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFhZjMyNzJjNTMxNmY3NTE2ODZlMmNlYzBlMjEwOGI5M2Q3NDMzMTg4MDY3ZmI5BzJpNg==: 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: ]] 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU0OTRlMTI2YTE5NjlkYzIyZjJmNDZhYWEwMGQ0ZjS62GkT: 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:02.789 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.789 
11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.356 nvme0n1 00:34:03.356 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.356 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.356 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.356 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.356 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.356 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxM2Y4MGRhZTQ5YmNkZWZlYWY1MWExN2UxYzNmMGRmM2IzYmIwYjgxZTRjODIwYzU4NDQ2YmNmN2ZhYTU4YV3CH/0=: 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.615 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.184 nvme0n1 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: ]] 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.184 request: 00:34:04.184 { 00:34:04.184 "name": "nvme0", 00:34:04.184 "trtype": "tcp", 00:34:04.184 "traddr": "10.0.0.1", 00:34:04.184 "adrfam": "ipv4", 00:34:04.184 "trsvcid": "4420", 00:34:04.184 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:04.184 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:04.184 "prchk_reftag": false, 00:34:04.184 "prchk_guard": false, 00:34:04.184 "hdgst": false, 00:34:04.184 "ddgst": false, 00:34:04.184 "allow_unrecognized_csi": false, 00:34:04.184 "method": "bdev_nvme_attach_controller", 00:34:04.184 "req_id": 1 00:34:04.184 } 00:34:04.184 Got JSON-RPC error response 00:34:04.184 response: 00:34:04.184 { 00:34:04.184 "code": -5, 00:34:04.184 "message": "Input/output error" 00:34:04.184 } 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:04.184 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
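The exchange above is the negative path of the host-side DH-HMAC-CHAP check: bdev_nvme_attach_controller is issued against the authenticated subsystem without any --dhchap-key, and the JSON-RPC call fails with code -5 ("Input/output error"), which the harness counts as the expected result. A minimal stand-alone sketch of the same probe, assuming SPDK's scripts/rpc.py is available and the target set up earlier in this run is still listening on 10.0.0.1:4420 (the NQNs, address, and flags are taken from the rpc_cmd calls in the log; the direct rpc.py invocation is only an illustration, not the harness's rpc_cmd wrapper):

  # Illustration only: mirrors the rpc_cmd call logged above.
  # Attaching without DHCHAP keys to a subsystem that requires
  # DH-HMAC-CHAP should fail; rpc.py exits non-zero on the -5 error.
  if ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
      echo "unexpected: attach succeeded without DHCHAP keys" >&2
      exit 1
  fi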
00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.185 request: 00:34:04.185 { 00:34:04.185 "name": "nvme0", 00:34:04.185 "trtype": "tcp", 00:34:04.185 "traddr": "10.0.0.1", 00:34:04.185 "adrfam": "ipv4", 00:34:04.185 "trsvcid": "4420", 00:34:04.185 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:04.185 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:04.185 "prchk_reftag": false, 00:34:04.185 "prchk_guard": false, 00:34:04.185 "hdgst": false, 00:34:04.185 "ddgst": false, 00:34:04.185 "dhchap_key": "key2", 00:34:04.185 "allow_unrecognized_csi": false, 00:34:04.185 "method": "bdev_nvme_attach_controller", 00:34:04.185 "req_id": 1 00:34:04.185 } 00:34:04.185 Got JSON-RPC error response 00:34:04.185 response: 00:34:04.185 { 00:34:04.185 "code": -5, 00:34:04.185 "message": "Input/output error" 00:34:04.185 } 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:04.185 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
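The probe is then repeated with only --dhchap-key key2, a key other than the one the target was just configured to expect, and the RPC again returns the -5 "Input/output error" that the test treats as success. Sketched the same way, under the same assumptions as the previous example:

  # Illustration only: a mismatched/partial credential should also be
  # rejected by the target, so a zero exit status would be a test failure.
  if ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2; then
      echo "unexpected: attach succeeded with the wrong DHCHAP key" >&2
      exit 1
  fi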
00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.444 request: 00:34:04.444 { 00:34:04.444 "name": "nvme0", 00:34:04.444 "trtype": "tcp", 00:34:04.444 "traddr": "10.0.0.1", 00:34:04.444 "adrfam": "ipv4", 00:34:04.444 "trsvcid": "4420", 00:34:04.444 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:04.444 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:04.444 "prchk_reftag": false, 00:34:04.444 "prchk_guard": false, 00:34:04.444 "hdgst": false, 00:34:04.444 "ddgst": false, 00:34:04.444 "dhchap_key": "key1", 00:34:04.444 "dhchap_ctrlr_key": "ckey2", 00:34:04.444 "allow_unrecognized_csi": false, 00:34:04.444 "method": "bdev_nvme_attach_controller", 00:34:04.444 "req_id": 1 00:34:04.444 } 00:34:04.444 Got JSON-RPC error response 00:34:04.444 response: 00:34:04.444 { 00:34:04.444 "code": -5, 00:34:04.444 "message": "Input/output 
error" 00:34:04.444 } 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.444 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.444 nvme0n1 00:34:04.444 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.444 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:04.444 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: ]] 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:04.703 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:04.704 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:04.704 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.704 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.704 request: 00:34:04.704 { 00:34:04.704 "name": "nvme0", 00:34:04.704 "dhchap_key": "key1", 00:34:04.704 "dhchap_ctrlr_key": "ckey2", 00:34:04.704 "method": "bdev_nvme_set_keys", 00:34:04.704 "req_id": 1 00:34:04.704 } 00:34:04.704 Got JSON-RPC error response 00:34:04.704 response: 00:34:04.704 { 00:34:04.704 "code": -13, 00:34:04.704 "message": "Permission denied" 00:34:04.704 } 00:34:04.704 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:04.704 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:04.704 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:04.704 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:04.704 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:34:04.704 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.704 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:04.704 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.704 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.704 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.704 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:04.704 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:06.081 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.081 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:06.081 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.081 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.081 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.081 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:06.081 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU0MmM5ZDNhOTliNDgzOTVjNzIyZWU1YjlkYWNlZjVmMWRiMmM1NGU1ZjQxNmUyb6Rrcw==: 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: ]] 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MDliY2Q0Y2QwNWFjNjA0NGMwZGI0MjZmMDgyNjk0MzljODA2OTA4MTZhNTVkNDY4vqpKvQ==: 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:07.018 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.019 nvme0n1 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlY2ZiOWI3ODZiNzdiYWNjODcyYjA1MjJjN2NkZma8vOAA: 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: ]] 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIxZThhZjcxY2JjMGZiMzQzZDc5Yjc5ZTBjODVjZDEjnxP/: 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.019 request: 00:34:07.019 { 00:34:07.019 "name": "nvme0", 00:34:07.019 "dhchap_key": "key2", 00:34:07.019 "dhchap_ctrlr_key": "ckey1", 00:34:07.019 "method": "bdev_nvme_set_keys", 00:34:07.019 "req_id": 1 00:34:07.019 } 00:34:07.019 Got JSON-RPC error response 00:34:07.019 response: 00:34:07.019 { 00:34:07.019 "code": -13, 00:34:07.019 "message": "Permission denied" 00:34:07.019 } 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.019 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.278 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:07.278 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:08.214 11:29:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:08.214 rmmod nvme_tcp 00:34:08.214 rmmod nvme_fabrics 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 2244026 ']' 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 2244026 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2244026 ']' 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2244026 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2244026 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2244026' 00:34:08.214 killing process with pid 2244026 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2244026 00:34:08.214 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2244026 00:34:08.474 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:08.474 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:08.474 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:08.474 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:08.474 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:34:08.474 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:08.474 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:34:08.474 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:08.474 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:08.474 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.474 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:34:08.474 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.010 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:11.010 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:11.010 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:11.010 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:11.010 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:11.010 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:34:11.010 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:11.010 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:11.010 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:11.010 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:11.010 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:34:11.010 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:34:11.010 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:12.917 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:12.917 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:12.917 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:12.917 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:12.917 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:13.176 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:13.176 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:13.176 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:13.176 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:13.176 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:13.176 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:13.176 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:13.176 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:13.176 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:13.176 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:13.176 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:14.114 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:14.114 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.IiM /tmp/spdk.key-null.nKh /tmp/spdk.key-sha256.fWD /tmp/spdk.key-sha384.FBY /tmp/spdk.key-sha512.dxu /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:14.114 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:16.652 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:16.652 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:16.652 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:34:16.652 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:16.652 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:16.652 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:16.652 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:16.652 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:16.652 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:16.652 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:16.652 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:16.652 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:16.652 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:16.652 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:16.652 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:16.652 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:16.652 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:16.652 00:34:16.652 real 0m51.469s 00:34:16.652 user 0m46.571s 00:34:16.652 sys 0m11.396s 00:34:16.652 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:16.652 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.652 ************************************ 00:34:16.652 END TEST nvmf_auth_host 00:34:16.652 ************************************ 00:34:16.652 11:29:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:16.652 11:29:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:16.652 11:29:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:16.652 11:29:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:16.652 11:29:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.652 ************************************ 00:34:16.652 START TEST nvmf_digest 00:34:16.652 ************************************ 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:16.652 * Looking for test storage... 
00:34:16.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:16.652 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:16.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.653 --rc genhtml_branch_coverage=1 00:34:16.653 --rc genhtml_function_coverage=1 00:34:16.653 --rc genhtml_legend=1 00:34:16.653 --rc geninfo_all_blocks=1 00:34:16.653 --rc geninfo_unexecuted_blocks=1 00:34:16.653 00:34:16.653 ' 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:16.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.653 --rc genhtml_branch_coverage=1 00:34:16.653 --rc genhtml_function_coverage=1 00:34:16.653 --rc genhtml_legend=1 00:34:16.653 --rc geninfo_all_blocks=1 00:34:16.653 --rc geninfo_unexecuted_blocks=1 00:34:16.653 00:34:16.653 ' 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:16.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.653 --rc genhtml_branch_coverage=1 00:34:16.653 --rc genhtml_function_coverage=1 00:34:16.653 --rc genhtml_legend=1 00:34:16.653 --rc geninfo_all_blocks=1 00:34:16.653 --rc geninfo_unexecuted_blocks=1 00:34:16.653 00:34:16.653 ' 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:16.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.653 --rc genhtml_branch_coverage=1 00:34:16.653 --rc genhtml_function_coverage=1 00:34:16.653 --rc genhtml_legend=1 00:34:16.653 --rc geninfo_all_blocks=1 00:34:16.653 --rc geninfo_unexecuted_blocks=1 00:34:16.653 00:34:16.653 ' 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:16.653 
11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:16.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:16.653 11:29:14 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:16.653 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.913 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:16.913 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:16.913 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:16.913 11:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:22.188 
11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:22.188 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:22.188 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.188 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:22.189 Found net devices under 0000:af:00.0: cvl_0_0 
00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:22.189 Found net devices under 0000:af:00.1: cvl_0_1 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:22.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:22.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:34:22.189 00:34:22.189 --- 10.0.0.2 ping statistics --- 00:34:22.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.189 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:22.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:22.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:34:22.189 00:34:22.189 --- 10.0.0.1 ping statistics --- 00:34:22.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.189 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:22.189 ************************************ 00:34:22.189 START TEST nvmf_digest_clean 00:34:22.189 ************************************ 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=2257524 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 2257524 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2257524 ']' 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:22.189 [2024-10-06 11:29:19.417352] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:34:22.189 [2024-10-06 11:29:19.417391] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:22.189 [2024-10-06 11:29:19.474714] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.189 [2024-10-06 11:29:19.513203] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:22.189 [2024-10-06 11:29:19.513241] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:22.189 [2024-10-06 11:29:19.513247] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:22.189 [2024-10-06 11:29:19.513253] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:22.189 [2024-10-06 11:29:19.513258] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:22.189 [2024-10-06 11:29:19.513778] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:22.189 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:22.190 null0 00:34:22.190 [2024-10-06 11:29:19.678213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:22.190 [2024-10-06 11:29:19.702426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2257543 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2257543 /var/tmp/bperf.sock 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2257543 ']' 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
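(Annotation) The common_target_config step only surfaces in the trace as the null0 bdev and the "Listening on 10.0.0.2 port 4420" notice; the underlying RPC calls are not echoed. A plausible reconstruction, assuming the usual SPDK RPC names, with the null bdev size and block size chosen for illustration rather than taken from this run:

  rpc() { ./scripts/rpc.py "$@"; }                  # target control socket (/var/tmp/spdk.sock)
  rpc framework_start_init                          # target was started with --wait-for-rpc
  rpc bdev_null_create null0 1000 512               # backing bdev named null0 (size illustrative)
  rpc nvmf_create_transport -t tcp -o               # matches NVMF_TRANSPORT_OPTS='-t tcp -o'
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420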
00:34:22.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:22.190 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:22.190 [2024-10-06 11:29:19.751789] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:34:22.190 [2024-10-06 11:29:19.751828] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2257543 ] 00:34:22.449 [2024-10-06 11:29:19.806481] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.449 [2024-10-06 11:29:19.846265] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:22.449 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:22.449 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:22.449 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:22.449 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:22.449 11:29:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:22.708 11:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:22.708 11:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:22.967 nvme0n1 00:34:22.967 11:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:22.967 11:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:22.967 Running I/O for 2 seconds... 
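(Annotation) On the initiator side each pass drives bdevperf entirely over its private RPC socket. The calls below are the ones traced above, with paths shortened; --ddgst enables the NVMe/TCP data digest, so every payload carries a CRC32C that the accel framework has to compute and verify:

  BPERF=/var/tmp/bperf.sock
  ./build/examples/bdevperf -m 2 -r $BPERF -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # (waitforlisten on $BPERF omitted here)
  ./scripts/rpc.py -s $BPERF framework_start_init
  ./scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests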
00:34:25.281 26775.00 IOPS, 104.59 MiB/s 27158.00 IOPS, 106.09 MiB/s 00:34:25.281 Latency(us) 00:34:25.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:25.281 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:25.281 nvme0n1 : 2.00 27173.32 106.15 0.00 0.00 4706.15 2356.18 13793.77 00:34:25.281 =================================================================================================================== 00:34:25.281 Total : 27173.32 106.15 0.00 0.00 4706.15 2356.18 13793.77 00:34:25.281 { 00:34:25.281 "results": [ 00:34:25.281 { 00:34:25.281 "job": "nvme0n1", 00:34:25.281 "core_mask": "0x2", 00:34:25.281 "workload": "randread", 00:34:25.281 "status": "finished", 00:34:25.281 "queue_depth": 128, 00:34:25.281 "io_size": 4096, 00:34:25.281 "runtime": 2.003583, 00:34:25.281 "iops": 27173.318999013267, 00:34:25.281 "mibps": 106.14577733989557, 00:34:25.281 "io_failed": 0, 00:34:25.281 "io_timeout": 0, 00:34:25.281 "avg_latency_us": 4706.153404494265, 00:34:25.281 "min_latency_us": 2356.175238095238, 00:34:25.281 "max_latency_us": 13793.76761904762 00:34:25.281 } 00:34:25.281 ], 00:34:25.281 "core_count": 1 00:34:25.281 } 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:25.281 | select(.opcode=="crc32c") 00:34:25.281 | "\(.module_name) \(.executed)"' 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2257543 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2257543 ']' 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2257543 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2257543 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2257543' 00:34:25.281 killing process with pid 2257543 00:34:25.281 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2257543 00:34:25.281 Received shutdown signal, test time was about 2.000000 seconds 00:34:25.281 00:34:25.281 Latency(us) 00:34:25.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:25.281 =================================================================================================================== 00:34:25.282 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:25.282 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2257543 00:34:25.540 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:25.540 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:25.540 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:25.540 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:25.540 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:25.540 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:25.540 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:25.540 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2258013 00:34:25.540 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2258013 /var/tmp/bperf.sock 00:34:25.540 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:25.540 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2258013 ']' 00:34:25.540 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:25.540 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:25.540 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:25.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:25.540 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:25.540 11:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:25.540 [2024-10-06 11:29:23.013294] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:34:25.540 [2024-10-06 11:29:23.013343] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2258013 ] 00:34:25.540 I/O size of 131072 is greater than zero copy threshold (65536). 
00:34:25.540 Zero copy mechanism will not be used. 00:34:25.540 [2024-10-06 11:29:23.069545] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:25.540 [2024-10-06 11:29:23.105614] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:25.799 11:29:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:25.799 11:29:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:25.799 11:29:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:25.799 11:29:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:25.799 11:29:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:26.058 11:29:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:26.058 11:29:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:26.317 nvme0n1 00:34:26.317 11:29:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:26.317 11:29:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:26.317 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:26.317 Zero copy mechanism will not be used. 00:34:26.317 Running I/O for 2 seconds... 
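(Annotation) After each pass the test reads the bperf application's accel statistics and checks that the crc32c digests were actually executed, by the software module in this run since DSA is disabled. A sketch of that check, using the accel_get_stats RPC and the jq filter from the trace; the final assertion line condenses what host/digest.sh does across several steps:

  stats=$(./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  read -r acc_module acc_executed <<< "$stats"
  [[ $acc_executed -gt 0 && $acc_module == software ]] || exit 1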
00:34:28.631 5776.00 IOPS, 722.00 MiB/s 5288.00 IOPS, 661.00 MiB/s 00:34:28.631 Latency(us) 00:34:28.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:28.631 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:28.631 nvme0n1 : 2.00 5286.42 660.80 0.00 0.00 3024.23 663.16 11421.99 00:34:28.631 =================================================================================================================== 00:34:28.631 Total : 5286.42 660.80 0.00 0.00 3024.23 663.16 11421.99 00:34:28.631 { 00:34:28.631 "results": [ 00:34:28.631 { 00:34:28.631 "job": "nvme0n1", 00:34:28.631 "core_mask": "0x2", 00:34:28.631 "workload": "randread", 00:34:28.631 "status": "finished", 00:34:28.631 "queue_depth": 16, 00:34:28.631 "io_size": 131072, 00:34:28.631 "runtime": 2.003625, 00:34:28.631 "iops": 5286.418366710337, 00:34:28.631 "mibps": 660.8022958387921, 00:34:28.631 "io_failed": 0, 00:34:28.631 "io_timeout": 0, 00:34:28.631 "avg_latency_us": 3024.230643072939, 00:34:28.631 "min_latency_us": 663.1619047619048, 00:34:28.631 "max_latency_us": 11421.988571428572 00:34:28.631 } 00:34:28.631 ], 00:34:28.631 "core_count": 1 00:34:28.631 } 00:34:28.631 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:28.631 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:28.631 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:28.631 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:28.631 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:28.631 | select(.opcode=="crc32c") 00:34:28.631 | "\(.module_name) \(.executed)"' 00:34:28.631 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:28.631 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:28.631 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:28.631 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:28.631 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2258013 00:34:28.631 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2258013 ']' 00:34:28.631 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2258013 00:34:28.631 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:28.631 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:28.631 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2258013 00:34:28.631 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:28.631 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:28.631 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2258013' 00:34:28.631 killing process with pid 2258013 00:34:28.631 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2258013 00:34:28.631 Received shutdown signal, test time was about 2.000000 seconds 00:34:28.631 00:34:28.631 Latency(us) 00:34:28.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:28.631 =================================================================================================================== 00:34:28.631 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:28.631 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2258013 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2258669 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2258669 /var/tmp/bperf.sock 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2258669 ']' 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:28.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:28.891 [2024-10-06 11:29:26.305510] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:34:28.891 [2024-10-06 11:29:26.305559] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2258669 ] 00:34:28.891 [2024-10-06 11:29:26.360551] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:28.891 [2024-10-06 11:29:26.401128] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:28.891 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:29.151 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:29.151 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:29.410 nvme0n1 00:34:29.410 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:29.410 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:29.670 Running I/O for 2 seconds... 
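(Annotation) nvmf_digest_clean repeats the same flow four times, varying only the bdevperf workload, I/O size and queue depth; the pass starting above is the first randwrite one. The four invocations, as they appear in the traced command lines:

  # run_bperf <rw> <io_size> <queue_depth> <scan_dsa>
  run_bperf randread  4096   128 false    # bdevperf -w randread  -o 4096   -q 128
  run_bperf randread  131072 16  false    # bdevperf -w randread  -o 131072 -q 16
  run_bperf randwrite 4096   128 false    # bdevperf -w randwrite -o 4096   -q 128
  run_bperf randwrite 131072 16  false    # bdevperf -w randwrite -o 131072 -q 16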
00:34:31.545 27078.00 IOPS, 105.77 MiB/s 27163.00 IOPS, 106.11 MiB/s 00:34:31.545 Latency(us) 00:34:31.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:31.545 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:31.545 nvme0n1 : 2.01 27145.82 106.04 0.00 0.00 4706.14 3479.65 11609.23 00:34:31.545 =================================================================================================================== 00:34:31.545 Total : 27145.82 106.04 0.00 0.00 4706.14 3479.65 11609.23 00:34:31.545 { 00:34:31.545 "results": [ 00:34:31.545 { 00:34:31.545 "job": "nvme0n1", 00:34:31.545 "core_mask": "0x2", 00:34:31.545 "workload": "randwrite", 00:34:31.545 "status": "finished", 00:34:31.545 "queue_depth": 128, 00:34:31.545 "io_size": 4096, 00:34:31.545 "runtime": 2.00716, 00:34:31.545 "iops": 27145.81797166145, 00:34:31.545 "mibps": 106.03835145180254, 00:34:31.545 "io_failed": 0, 00:34:31.545 "io_timeout": 0, 00:34:31.545 "avg_latency_us": 4706.14457830146, 00:34:31.545 "min_latency_us": 3479.649523809524, 00:34:31.545 "max_latency_us": 11609.234285714285 00:34:31.545 } 00:34:31.545 ], 00:34:31.545 "core_count": 1 00:34:31.545 } 00:34:31.545 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:31.545 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:31.545 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:31.545 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:31.545 | select(.opcode=="crc32c") 00:34:31.545 | "\(.module_name) \(.executed)"' 00:34:31.545 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:31.806 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:31.806 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:31.806 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:31.806 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:31.806 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2258669 00:34:31.806 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2258669 ']' 00:34:31.806 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2258669 00:34:31.806 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:31.806 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:31.806 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2258669 00:34:31.806 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:31.806 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:31.806 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2258669' 00:34:31.806 killing process with pid 2258669 00:34:31.806 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2258669 00:34:31.806 Received shutdown signal, test time was about 2.000000 seconds 00:34:31.806 00:34:31.806 Latency(us) 00:34:31.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:31.806 =================================================================================================================== 00:34:31.806 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:31.806 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2258669 00:34:32.066 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:34:32.066 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:32.066 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:32.066 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:32.066 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:32.066 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:32.066 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:32.066 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2259129 00:34:32.066 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2259129 /var/tmp/bperf.sock 00:34:32.066 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:32.066 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2259129 ']' 00:34:32.066 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:32.066 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:32.066 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:32.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:32.066 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:32.066 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:32.066 [2024-10-06 11:29:29.538642] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:34:32.066 [2024-10-06 11:29:29.538689] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2259129 ] 00:34:32.066 I/O size of 131072 is greater than zero copy threshold (65536). 
00:34:32.066 Zero copy mechanism will not be used. 00:34:32.066 [2024-10-06 11:29:29.593992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.066 [2024-10-06 11:29:29.634320] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:32.325 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:32.325 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:32.325 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:32.325 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:32.325 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:32.584 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:32.585 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:32.844 nvme0n1 00:34:32.844 11:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:32.844 11:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:33.103 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:33.103 Zero copy mechanism will not be used. 00:34:33.103 Running I/O for 2 seconds... 
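(Annotation) Each pass ends by tearing down its bdevperf instance through autotest's killprocess helper, which is why the trace shows kill -0, a ps comm= lookup and the "killing process with pid ..." echo. A simplified sketch of that teardown, assuming the helper does little more than what is traced (signal selection and timeout handling omitted):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                          # is the process still alive?
      if [[ $(ps --no-headers -o comm= "$pid") == sudo ]]; then
          return 1                                        # never signal a bare sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }
  killprocess "$bperfpid"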
00:34:35.128 5655.00 IOPS, 706.88 MiB/s 5852.00 IOPS, 731.50 MiB/s 00:34:35.128 Latency(us) 00:34:35.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.128 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:35.128 nvme0n1 : 2.01 5846.36 730.80 0.00 0.00 2731.50 1841.25 15478.98 00:34:35.128 =================================================================================================================== 00:34:35.128 Total : 5846.36 730.80 0.00 0.00 2731.50 1841.25 15478.98 00:34:35.128 { 00:34:35.128 "results": [ 00:34:35.128 { 00:34:35.128 "job": "nvme0n1", 00:34:35.128 "core_mask": "0x2", 00:34:35.128 "workload": "randwrite", 00:34:35.128 "status": "finished", 00:34:35.128 "queue_depth": 16, 00:34:35.128 "io_size": 131072, 00:34:35.128 "runtime": 2.005179, 00:34:35.128 "iops": 5846.360848582595, 00:34:35.128 "mibps": 730.7951060728244, 00:34:35.128 "io_failed": 0, 00:34:35.128 "io_timeout": 0, 00:34:35.128 "avg_latency_us": 2731.4982353777473, 00:34:35.128 "min_latency_us": 1841.249523809524, 00:34:35.128 "max_latency_us": 15478.979047619048 00:34:35.128 } 00:34:35.128 ], 00:34:35.128 "core_count": 1 00:34:35.128 } 00:34:35.128 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:35.129 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:35.129 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:35.129 | select(.opcode=="crc32c") 00:34:35.129 | "\(.module_name) \(.executed)"' 00:34:35.129 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:35.129 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:35.129 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:35.129 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:35.129 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:35.129 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:35.129 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2259129 00:34:35.129 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2259129 ']' 00:34:35.129 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2259129 00:34:35.129 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:35.129 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:35.129 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2259129 00:34:35.409 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:35.409 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:35.409 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2259129' 00:34:35.409 killing process with pid 2259129 00:34:35.409 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2259129 00:34:35.409 Received shutdown signal, test time was about 2.000000 seconds 00:34:35.409 00:34:35.409 Latency(us) 00:34:35.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.409 =================================================================================================================== 00:34:35.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:35.409 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2259129 00:34:35.409 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2257524 00:34:35.409 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2257524 ']' 00:34:35.409 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2257524 00:34:35.409 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:35.409 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:35.409 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2257524 00:34:35.409 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:35.409 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:35.409 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2257524' 00:34:35.409 killing process with pid 2257524 00:34:35.409 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2257524 00:34:35.409 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2257524 00:34:35.668 00:34:35.668 real 0m13.776s 00:34:35.668 user 0m26.511s 00:34:35.668 sys 0m4.324s 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:35.668 ************************************ 00:34:35.668 END TEST nvmf_digest_clean 00:34:35.668 ************************************ 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:35.668 ************************************ 00:34:35.668 START TEST nvmf_digest_error 00:34:35.668 ************************************ 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:34:35.668 11:29:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=2259791 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 2259791 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2259791 ']' 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:35.668 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:35.925 [2024-10-06 11:29:33.264845] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:34:35.925 [2024-10-06 11:29:33.264885] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:35.925 [2024-10-06 11:29:33.317026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.925 [2024-10-06 11:29:33.355342] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:35.926 [2024-10-06 11:29:33.355382] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:35.926 [2024-10-06 11:29:33.355389] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:35.926 [2024-10-06 11:29:33.355395] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:35.926 [2024-10-06 11:29:33.355400] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:35.926 [2024-10-06 11:29:33.355937] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.926 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:35.926 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:34:35.926 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:35.926 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:35.926 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:35.926 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:35.926 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:35.926 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.926 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:35.926 [2024-10-06 11:29:33.452449] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:35.926 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.926 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:34:35.926 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:34:35.926 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.926 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:36.185 null0 00:34:36.185 [2024-10-06 11:29:33.536756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:36.185 [2024-10-06 11:29:33.560960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:36.185 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.185 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:34:36.185 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:36.185 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:36.185 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:36.185 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:36.185 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2259858 00:34:36.185 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2259858 /var/tmp/bperf.sock 00:34:36.185 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:34:36.185 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2259858 ']' 
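(Annotation) For nvmf_digest_error the target is restarted and, before framework init completes, the crc32c operation is reassigned to the accel "error" module (the accel_rpc notice above); the rest of the target setup mirrors the clean test. A sketch of the target-side ordering, using the RPC exactly as named in the trace and assuming the same configuration helpers as before:

  # target side; the target runs with --wait-for-rpc, so the opcode assignment
  # has to land before framework_start_init
  ./scripts/rpc.py accel_assign_opc -o crc32c -m error
  ./scripts/rpc.py framework_start_init
  # ...followed by the same transport/subsystem/listener setup as in nvmf_digest_clean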
00:34:36.185 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:36.185 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:36.185 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:36.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:36.185 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:36.185 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:36.185 [2024-10-06 11:29:33.613252] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:34:36.185 [2024-10-06 11:29:33.613292] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2259858 ] 00:34:36.185 [2024-10-06 11:29:33.668460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:36.185 [2024-10-06 11:29:33.708794] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:36.445 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:36.445 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:34:36.445 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:36.445 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:36.445 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:36.445 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.445 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:36.445 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.445 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:36.445 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:36.704 nvme0n1 00:34:36.705 11:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:36.705 11:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.705 11:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
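(Annotation) On the bperf side the error test first enables per-controller error statistics and unlimited bdev retries, attaches with the data digest enabled, and only then arms the target's accel error injector for crc32c with the -t corrupt -i 256 arguments traced above; the corrupted digests are what produce the "data digest error" messages and the COMMAND TRANSIENT TRANSPORT ERROR retries that follow. The calls, as traced, with socket paths shortened:

  TGT="./scripts/rpc.py"                        # target socket (/var/tmp/spdk.sock)
  BPF="./scripts/rpc.py -s /var/tmp/bperf.sock" # bdevperf socket
  $BPF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $TGT accel_error_inject_error -o crc32c -t disable         # injector idle while connecting
  $BPF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $TGT accel_error_inject_error -o crc32c -t corrupt -i 256  # arm corruption, as traced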
00:34:36.705 11:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.705 11:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:36.705 11:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:36.965 Running I/O for 2 seconds... 00:34:36.965 [2024-10-06 11:29:34.353645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.965 [2024-10-06 11:29:34.353680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.965 [2024-10-06 11:29:34.353691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.965 [2024-10-06 11:29:34.364711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.965 [2024-10-06 11:29:34.364736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.965 [2024-10-06 11:29:34.364745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.965 [2024-10-06 11:29:34.373119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.965 [2024-10-06 11:29:34.373141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.965 [2024-10-06 11:29:34.373150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.965 [2024-10-06 11:29:34.384096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.965 [2024-10-06 11:29:34.384118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.965 [2024-10-06 11:29:34.384127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.965 [2024-10-06 11:29:34.391762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.965 [2024-10-06 11:29:34.391783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.965 [2024-10-06 11:29:34.391791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.965 [2024-10-06 11:29:34.402333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.965 [2024-10-06 11:29:34.402354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.965 [2024-10-06 11:29:34.402363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.965 [2024-10-06 11:29:34.411585] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.965 [2024-10-06 11:29:34.411604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.965 [2024-10-06 11:29:34.411612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.965 [2024-10-06 11:29:34.419868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.965 [2024-10-06 11:29:34.419889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.965 [2024-10-06 11:29:34.419897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.965 [2024-10-06 11:29:34.430349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.965 [2024-10-06 11:29:34.430369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.965 [2024-10-06 11:29:34.430377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.965 [2024-10-06 11:29:34.440362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.965 [2024-10-06 11:29:34.440382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.965 [2024-10-06 11:29:34.440389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.965 [2024-10-06 11:29:34.448932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.965 [2024-10-06 11:29:34.448952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.965 [2024-10-06 11:29:34.448960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.965 [2024-10-06 11:29:34.460287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.965 [2024-10-06 11:29:34.460307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.965 [2024-10-06 11:29:34.460315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.965 [2024-10-06 11:29:34.468839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.965 [2024-10-06 11:29:34.468859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.965 [2024-10-06 11:29:34.468867] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.966 [2024-10-06 11:29:34.478894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.966 [2024-10-06 11:29:34.478914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.966 [2024-10-06 11:29:34.478925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.966 [2024-10-06 11:29:34.487358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.966 [2024-10-06 11:29:34.487378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.966 [2024-10-06 11:29:34.487387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.966 [2024-10-06 11:29:34.497291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.966 [2024-10-06 11:29:34.497311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.966 [2024-10-06 11:29:34.497319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.966 [2024-10-06 11:29:34.507069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.966 [2024-10-06 11:29:34.507089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.966 [2024-10-06 11:29:34.507098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.966 [2024-10-06 11:29:34.516112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.966 [2024-10-06 11:29:34.516132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.966 [2024-10-06 11:29:34.516140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.966 [2024-10-06 11:29:34.525358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.966 [2024-10-06 11:29:34.525379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.966 [2024-10-06 11:29:34.525386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.966 [2024-10-06 11:29:34.534133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:36.966 [2024-10-06 11:29:34.534154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.966 [2024-10-06 
11:29:34.534162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.225 [2024-10-06 11:29:34.544565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.225 [2024-10-06 11:29:34.544587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.225 [2024-10-06 11:29:34.544595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.225 [2024-10-06 11:29:34.553537] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.225 [2024-10-06 11:29:34.553557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.225 [2024-10-06 11:29:34.553565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.225 [2024-10-06 11:29:34.562464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.225 [2024-10-06 11:29:34.562484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.225 [2024-10-06 11:29:34.562492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.225 [2024-10-06 11:29:34.571288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.225 [2024-10-06 11:29:34.571307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.225 [2024-10-06 11:29:34.571315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.225 [2024-10-06 11:29:34.581051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.225 [2024-10-06 11:29:34.581077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.225 [2024-10-06 11:29:34.581085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.225 [2024-10-06 11:29:34.590641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.225 [2024-10-06 11:29:34.590661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.225 [2024-10-06 11:29:34.590669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.225 [2024-10-06 11:29:34.600120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.225 [2024-10-06 11:29:34.600139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20640 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:37.225 [2024-10-06 11:29:34.600147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.225 [2024-10-06 11:29:34.609617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.225 [2024-10-06 11:29:34.609637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.225 [2024-10-06 11:29:34.609645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.225 [2024-10-06 11:29:34.619119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.225 [2024-10-06 11:29:34.619139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.225 [2024-10-06 11:29:34.619147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.225 [2024-10-06 11:29:34.629355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.225 [2024-10-06 11:29:34.629375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.225 [2024-10-06 11:29:34.629383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.225 [2024-10-06 11:29:34.638166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.225 [2024-10-06 11:29:34.638185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.225 [2024-10-06 11:29:34.638196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.225 [2024-10-06 11:29:34.647661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.225 [2024-10-06 11:29:34.647681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.225 [2024-10-06 11:29:34.647689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.225 [2024-10-06 11:29:34.656636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.225 [2024-10-06 11:29:34.656656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.225 [2024-10-06 11:29:34.656664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.225 [2024-10-06 11:29:34.665198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.225 [2024-10-06 11:29:34.665218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:120 nsid:1 lba:11722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.225 [2024-10-06 11:29:34.665226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.225 [2024-10-06 11:29:34.675687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.225 [2024-10-06 11:29:34.675706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.226 [2024-10-06 11:29:34.675714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.226 [2024-10-06 11:29:34.684231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.226 [2024-10-06 11:29:34.684251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.226 [2024-10-06 11:29:34.684259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.226 [2024-10-06 11:29:34.694287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.226 [2024-10-06 11:29:34.694308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.226 [2024-10-06 11:29:34.694316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.226 [2024-10-06 11:29:34.703919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.226 [2024-10-06 11:29:34.703940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.226 [2024-10-06 11:29:34.703949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.226 [2024-10-06 11:29:34.712315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.226 [2024-10-06 11:29:34.712335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.226 [2024-10-06 11:29:34.712343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.226 [2024-10-06 11:29:34.721971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.226 [2024-10-06 11:29:34.721994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.226 [2024-10-06 11:29:34.722003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.226 [2024-10-06 11:29:34.731438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.226 [2024-10-06 11:29:34.731458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.226 [2024-10-06 11:29:34.731466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.226 [2024-10-06 11:29:34.740235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.226 [2024-10-06 11:29:34.740255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.226 [2024-10-06 11:29:34.740262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.226 [2024-10-06 11:29:34.749747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.226 [2024-10-06 11:29:34.749766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.226 [2024-10-06 11:29:34.749775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.226 [2024-10-06 11:29:34.759776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.226 [2024-10-06 11:29:34.759795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.226 [2024-10-06 11:29:34.759803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.226 [2024-10-06 11:29:34.769558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.226 [2024-10-06 11:29:34.769577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.226 [2024-10-06 11:29:34.769585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.226 [2024-10-06 11:29:34.777415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.226 [2024-10-06 11:29:34.777435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.226 [2024-10-06 11:29:34.777443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.226 [2024-10-06 11:29:34.787261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.226 [2024-10-06 11:29:34.787280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.226 [2024-10-06 11:29:34.787288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.226 [2024-10-06 11:29:34.798003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 
00:34:37.226 [2024-10-06 11:29:34.798024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.226 [2024-10-06 11:29:34.798033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.511 [2024-10-06 11:29:34.806567] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.511 [2024-10-06 11:29:34.806586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.511 [2024-10-06 11:29:34.806594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.511 [2024-10-06 11:29:34.816070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.511 [2024-10-06 11:29:34.816089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.511 [2024-10-06 11:29:34.816097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.511 [2024-10-06 11:29:34.825856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.511 [2024-10-06 11:29:34.825876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.511 [2024-10-06 11:29:34.825884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.511 [2024-10-06 11:29:34.834769] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.511 [2024-10-06 11:29:34.834789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.511 [2024-10-06 11:29:34.834798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.511 [2024-10-06 11:29:34.844665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.511 [2024-10-06 11:29:34.844684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.511 [2024-10-06 11:29:34.844692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.511 [2024-10-06 11:29:34.854098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.511 [2024-10-06 11:29:34.854117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.511 [2024-10-06 11:29:34.854126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.511 [2024-10-06 11:29:34.863299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.511 [2024-10-06 11:29:34.863318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.511 [2024-10-06 11:29:34.863327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.511 [2024-10-06 11:29:34.872946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:34.872966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:34.872974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:34.881898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:34.881918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:34.881929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:34.891338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:34.891357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:34.891365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:34.900772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:34.900792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:34.900800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:34.909734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:34.909753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:34.909760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:34.919821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:34.919840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:34.919849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:34.927844] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:34.927864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:34.927872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:34.937649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:34.937669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:34.937677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:34.947621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:34.947640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:34.947648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:34.955918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:34.955938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:34.955946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:34.966274] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:34.966297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:34.966305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:34.975320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:34.975339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:34.975347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:34.984514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:34.984533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:34.984541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:34.993686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:34.993705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:34.993713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:35.002024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:35.002043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:35.002051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:35.012777] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:35.012797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:35.012805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:35.022045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:35.022068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:35.022076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:35.031154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:35.031173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:35.031181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:35.041365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:35.041384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:35.041395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:35.050288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:35.050308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:35.050316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:35.059900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:35.059919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:35.059927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:35.068422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:35.068441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:35.068450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.512 [2024-10-06 11:29:35.078031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.512 [2024-10-06 11:29:35.078050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.512 [2024-10-06 11:29:35.078063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.772 [2024-10-06 11:29:35.088435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.772 [2024-10-06 11:29:35.088454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.772 [2024-10-06 11:29:35.088462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.772 [2024-10-06 11:29:35.096677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.772 [2024-10-06 11:29:35.096697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.772 [2024-10-06 11:29:35.096705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.772 [2024-10-06 11:29:35.106323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.772 [2024-10-06 11:29:35.106343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.772 [2024-10-06 11:29:35.106351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.772 [2024-10-06 11:29:35.115876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.772 [2024-10-06 11:29:35.115895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.772 [2024-10-06 11:29:35.115903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.772 [2024-10-06 11:29:35.125505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.772 [2024-10-06 11:29:35.125529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.125536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.135223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.135242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.135250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.143893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.143912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.143921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.152593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.152612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.152620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.162393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.162412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.162420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.171841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.171860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.171868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.180908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.180927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 
[2024-10-06 11:29:35.180935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.191183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.191202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.191210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.199213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.199232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.199240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.209323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.209343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.209351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.218851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.218870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.218878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.227475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.227494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.227502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.237201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.237219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.237227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.246132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.246152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3749 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.246160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.255707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.255726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.255735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.265729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.265747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.265755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.273743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.273762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.273769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.283324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.283343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.283354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.293785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.293805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.293813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.302145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.302164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.302172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.311998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.312017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:59 nsid:1 lba:24464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.312025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.320216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.320235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.320243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 [2024-10-06 11:29:35.330348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.330367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.330375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:37.773 26938.00 IOPS, 105.23 MiB/s [2024-10-06 11:29:35.339790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:37.773 [2024-10-06 11:29:35.339810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.773 [2024-10-06 11:29:35.339818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.032 [2024-10-06 11:29:35.351390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.032 [2024-10-06 11:29:35.351410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.032 [2024-10-06 11:29:35.351419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.032 [2024-10-06 11:29:35.359400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.032 [2024-10-06 11:29:35.359419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.032 [2024-10-06 11:29:35.359427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.032 [2024-10-06 11:29:35.370555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.032 [2024-10-06 11:29:35.370576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.032 [2024-10-06 11:29:35.370584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.032 [2024-10-06 11:29:35.379958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.032 [2024-10-06 
11:29:35.379977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.032 [2024-10-06 11:29:35.379985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.032 [2024-10-06 11:29:35.388784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.032 [2024-10-06 11:29:35.388803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.032 [2024-10-06 11:29:35.388811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.032 [2024-10-06 11:29:35.397399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.032 [2024-10-06 11:29:35.397418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.032 [2024-10-06 11:29:35.397426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.032 [2024-10-06 11:29:35.407882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.032 [2024-10-06 11:29:35.407903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.032 [2024-10-06 11:29:35.407911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.032 [2024-10-06 11:29:35.416174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.032 [2024-10-06 11:29:35.416194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.032 [2024-10-06 11:29:35.416202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.032 [2024-10-06 11:29:35.426799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.032 [2024-10-06 11:29:35.426820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.032 [2024-10-06 11:29:35.426828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.032 [2024-10-06 11:29:35.435548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.435568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.435577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.033 [2024-10-06 11:29:35.444799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.444819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.444831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.033 [2024-10-06 11:29:35.454527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.454547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.454556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.033 [2024-10-06 11:29:35.462733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.462751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.462759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.033 [2024-10-06 11:29:35.473372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.473391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.473399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.033 [2024-10-06 11:29:35.483023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.483042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.483050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.033 [2024-10-06 11:29:35.491847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.491866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.491875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.033 [2024-10-06 11:29:35.501740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.501759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.501768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.033 [2024-10-06 11:29:35.511677] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.511696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.511704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.033 [2024-10-06 11:29:35.520101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.520121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.520129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.033 [2024-10-06 11:29:35.529640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.529663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.529671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.033 [2024-10-06 11:29:35.539200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.539220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.539228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.033 [2024-10-06 11:29:35.547504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.547524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.547532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.033 [2024-10-06 11:29:35.557905] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.557924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.557932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.033 [2024-10-06 11:29:35.567234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.567254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.567262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:34:38.033 [2024-10-06 11:29:35.576476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.576495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.576503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.033 [2024-10-06 11:29:35.584987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.585007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.585015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.033 [2024-10-06 11:29:35.594322] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.594341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.594349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.033 [2024-10-06 11:29:35.604445] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.033 [2024-10-06 11:29:35.604464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.033 [2024-10-06 11:29:35.604473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.293 [2024-10-06 11:29:35.612923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.293 [2024-10-06 11:29:35.612942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.293 [2024-10-06 11:29:35.612950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.293 [2024-10-06 11:29:35.623578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.293 [2024-10-06 11:29:35.623598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.293 [2024-10-06 11:29:35.623606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.293 [2024-10-06 11:29:35.632065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.293 [2024-10-06 11:29:35.632084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.293 [2024-10-06 11:29:35.632093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.293 [2024-10-06 11:29:35.640900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.293 [2024-10-06 11:29:35.640920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.293 [2024-10-06 11:29:35.640928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.293 [2024-10-06 11:29:35.651158] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.293 [2024-10-06 11:29:35.651178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.293 [2024-10-06 11:29:35.651186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.293 [2024-10-06 11:29:35.660756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.293 [2024-10-06 11:29:35.660775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.293 [2024-10-06 11:29:35.660783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.293 [2024-10-06 11:29:35.669886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.293 [2024-10-06 11:29:35.669906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.293 [2024-10-06 11:29:35.669914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.293 [2024-10-06 11:29:35.678668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.293 [2024-10-06 11:29:35.678688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.293 [2024-10-06 11:29:35.678696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.293 [2024-10-06 11:29:35.689383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.293 [2024-10-06 11:29:35.689402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.293 [2024-10-06 11:29:35.689413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.293 [2024-10-06 11:29:35.697276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.293 [2024-10-06 11:29:35.697295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.293 [2024-10-06 11:29:35.697302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.293 [2024-10-06 11:29:35.707288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.293 [2024-10-06 11:29:35.707307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.293 [2024-10-06 11:29:35.707315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.293 [2024-10-06 11:29:35.717435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.293 [2024-10-06 11:29:35.717455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.293 [2024-10-06 11:29:35.717463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.293 [2024-10-06 11:29:35.725576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.293 [2024-10-06 11:29:35.725597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.293 [2024-10-06 11:29:35.725605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.293 [2024-10-06 11:29:35.735300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.293 [2024-10-06 11:29:35.735320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.293 [2024-10-06 11:29:35.735327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.293 [2024-10-06 11:29:35.745221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.293 [2024-10-06 11:29:35.745241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.293 [2024-10-06 11:29:35.745249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.293 [2024-10-06 11:29:35.753125] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.293 [2024-10-06 11:29:35.753145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.293 [2024-10-06 11:29:35.753154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.293 [2024-10-06 11:29:35.763652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.293 [2024-10-06 11:29:35.763671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:38.293 [2024-10-06 11:29:35.763679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.293 [2024-10-06 11:29:35.774295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.293 [2024-10-06 11:29:35.774320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.294 [2024-10-06 11:29:35.774328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.294 [2024-10-06 11:29:35.784642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.294 [2024-10-06 11:29:35.784662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.294 [2024-10-06 11:29:35.784670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.294 [2024-10-06 11:29:35.793644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.294 [2024-10-06 11:29:35.793664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.294 [2024-10-06 11:29:35.793672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.294 [2024-10-06 11:29:35.803640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.294 [2024-10-06 11:29:35.803659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.294 [2024-10-06 11:29:35.803667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.294 [2024-10-06 11:29:35.812503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.294 [2024-10-06 11:29:35.812523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.294 [2024-10-06 11:29:35.812531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.294 [2024-10-06 11:29:35.821878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.294 [2024-10-06 11:29:35.821898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.294 [2024-10-06 11:29:35.821907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.294 [2024-10-06 11:29:35.830696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.294 [2024-10-06 11:29:35.830715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 
lba:25275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.294 [2024-10-06 11:29:35.830723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.294 [2024-10-06 11:29:35.840997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.294 [2024-10-06 11:29:35.841017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.294 [2024-10-06 11:29:35.841025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.294 [2024-10-06 11:29:35.851555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.294 [2024-10-06 11:29:35.851574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.294 [2024-10-06 11:29:35.851583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.294 [2024-10-06 11:29:35.860231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.294 [2024-10-06 11:29:35.860250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.294 [2024-10-06 11:29:35.860258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.554 [2024-10-06 11:29:35.871001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.554 [2024-10-06 11:29:35.871021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.554 [2024-10-06 11:29:35.871030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.554 [2024-10-06 11:29:35.880583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.554 [2024-10-06 11:29:35.880603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.554 [2024-10-06 11:29:35.880611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.554 [2024-10-06 11:29:35.890115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.554 [2024-10-06 11:29:35.890135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.554 [2024-10-06 11:29:35.890143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.554 [2024-10-06 11:29:35.898632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.554 [2024-10-06 11:29:35.898651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.554 [2024-10-06 11:29:35.898659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.554 [2024-10-06 11:29:35.909214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.554 [2024-10-06 11:29:35.909234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.554 [2024-10-06 11:29:35.909242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.554 [2024-10-06 11:29:35.918431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.554 [2024-10-06 11:29:35.918450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.554 [2024-10-06 11:29:35.918459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.554 [2024-10-06 11:29:35.927511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.554 [2024-10-06 11:29:35.927531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.554 [2024-10-06 11:29:35.927539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.554 [2024-10-06 11:29:35.936388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.554 [2024-10-06 11:29:35.936412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.554 [2024-10-06 11:29:35.936420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.554 [2024-10-06 11:29:35.945945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.554 [2024-10-06 11:29:35.945965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:35.945973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:35.955898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.555 [2024-10-06 11:29:35.955917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:35.955925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:35.965024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 
00:34:38.555 [2024-10-06 11:29:35.965044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:35.965052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:35.973746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.555 [2024-10-06 11:29:35.973766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:35.973774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:35.982949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.555 [2024-10-06 11:29:35.982971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:35.982979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:35.993284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.555 [2024-10-06 11:29:35.993303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:35.993311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:36.001556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.555 [2024-10-06 11:29:36.001576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:36.001584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:36.010511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.555 [2024-10-06 11:29:36.010531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:36.010539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:36.020660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.555 [2024-10-06 11:29:36.020680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:36.020688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:36.029666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.555 [2024-10-06 11:29:36.029685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:36.029693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:36.038495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.555 [2024-10-06 11:29:36.038514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:36.038522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:36.048524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.555 [2024-10-06 11:29:36.048544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:36.048552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:36.058132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.555 [2024-10-06 11:29:36.058151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:36.058159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:36.065675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.555 [2024-10-06 11:29:36.065695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:36.065703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:36.076977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.555 [2024-10-06 11:29:36.076996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:36.077003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:36.086580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.555 [2024-10-06 11:29:36.086599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:36.086607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:36.096273] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.555 [2024-10-06 11:29:36.096292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:36.096303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:36.105163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.555 [2024-10-06 11:29:36.105183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:36.105192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:36.115074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.555 [2024-10-06 11:29:36.115094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:36.115102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.555 [2024-10-06 11:29:36.123381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.555 [2024-10-06 11:29:36.123401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.555 [2024-10-06 11:29:36.123409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.815 [2024-10-06 11:29:36.133298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.815 [2024-10-06 11:29:36.133317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.133325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.142675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.142694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.142702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.152352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.152372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.152380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:34:38.816 [2024-10-06 11:29:36.160663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.160682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.160690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.170156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.170176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.170184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.180373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.180397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.180405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.189841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.189860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.189868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.198465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.198484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.198492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.208046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.208072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.208080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.217749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.217768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.217775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.226628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.226647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.226655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.235522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.235541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.235550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.245888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.245907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.245915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.254467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.254486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.254494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.264270] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.264289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.264296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.274021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.274040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.274048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.283878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.283897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.283904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.291800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.291819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.291827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.302289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.302309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.302317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.312082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.312101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.312109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.320587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.320607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.320615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.329621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.329641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.329649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 [2024-10-06 11:29:36.340184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6dcd0) 00:34:38.816 [2024-10-06 11:29:36.340203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:38.816 [2024-10-06 11:29:36.340214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.816 27025.50 IOPS, 105.57 MiB/s 00:34:38.816 Latency(us) 00:34:38.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:38.816 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:38.816 nvme0n1 : 2.00 27045.75 105.65 0.00 0.00 4727.67 2153.33 13731.35 00:34:38.816 
=================================================================================================================== 00:34:38.816 Total : 27045.75 105.65 0.00 0.00 4727.67 2153.33 13731.35 00:34:38.816 { 00:34:38.816 "results": [ 00:34:38.816 { 00:34:38.816 "job": "nvme0n1", 00:34:38.816 "core_mask": "0x2", 00:34:38.816 "workload": "randread", 00:34:38.816 "status": "finished", 00:34:38.816 "queue_depth": 128, 00:34:38.816 "io_size": 4096, 00:34:38.816 "runtime": 2.004012, 00:34:38.816 "iops": 27045.74623305649, 00:34:38.816 "mibps": 105.64744622287691, 00:34:38.816 "io_failed": 0, 00:34:38.816 "io_timeout": 0, 00:34:38.816 "avg_latency_us": 4727.67387777192, 00:34:38.816 "min_latency_us": 2153.325714285714, 00:34:38.817 "max_latency_us": 13731.352380952381 00:34:38.817 } 00:34:38.817 ], 00:34:38.817 "core_count": 1 00:34:38.817 } 00:34:38.817 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:38.817 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:38.817 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:38.817 | .driver_specific 00:34:38.817 | .nvme_error 00:34:38.817 | .status_code 00:34:38.817 | .command_transient_transport_error' 00:34:38.817 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:39.076 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 212 > 0 )) 00:34:39.076 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2259858 00:34:39.076 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2259858 ']' 00:34:39.076 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2259858 00:34:39.076 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:34:39.076 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:39.076 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2259858 00:34:39.076 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:39.076 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:39.076 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2259858' 00:34:39.076 killing process with pid 2259858 00:34:39.077 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2259858 00:34:39.077 Received shutdown signal, test time was about 2.000000 seconds 00:34:39.077 00:34:39.077 Latency(us) 00:34:39.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:39.077 =================================================================================================================== 00:34:39.077 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:39.077 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2259858 00:34:39.337 11:29:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:34:39.337 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:39.337 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:39.337 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:39.337 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:39.337 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2260324 00:34:39.337 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2260324 /var/tmp/bperf.sock 00:34:39.337 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2260324 ']' 00:34:39.337 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:39.337 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:39.337 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:39.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:39.337 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:34:39.337 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:39.337 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:39.337 [2024-10-06 11:29:36.834846] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:34:39.337 [2024-10-06 11:29:36.834894] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2260324 ] 00:34:39.337 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:39.337 Zero copy mechanism will not be used. 
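The trace above launches a second bdevperf instance for the 131072-byte, queue-depth-16 error run and blocks until its RPC socket is up (waitforlisten). A minimal sketch of that launch-and-wait pattern, with the Jenkins paths shortened; the poll loop is only an approximation of the framework's waitforlisten helper, not its actual implementation:

    # bdevperf on core 1 (mask 0x2), 128 KiB random reads, queue depth 16, 2 s run;
    # -z makes it wait for a perform_tests RPC instead of starting I/O immediately
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # rough stand-in for waitforlisten: poll until the RPC socket answers
    until ./scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done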
00:34:39.337 [2024-10-06 11:29:36.890049] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:39.596 [2024-10-06 11:29:36.928292] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:39.596 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:39.596 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:34:39.596 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:39.596 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:39.856 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:39.856 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.856 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:39.856 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.856 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:39.856 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:40.115 nvme0n1 00:34:40.115 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:40.115 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.115 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:40.115 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.115 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:40.115 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:40.375 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:40.375 Zero copy mechanism will not be used. 00:34:40.375 Running I/O for 2 seconds... 
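With bdevperf idling on /var/tmp/bperf.sock, the trace configures the error run entirely over JSON-RPC: NVMe error statistics and unlimited retries are enabled, the target is attached with TCP data digest (--ddgst), crc32c corruption is injected into the accel layer every 32 operations, perform_tests starts the timed randread workload, and the resulting COMMAND TRANSIENT TRANSPORT ERROR count is read back from bdev_get_iostat, as in the get_transient_errcount check earlier in the log. A condensed sketch of that sequence, using only commands visible in the trace; paths are shortened, and the socket that rpc_cmd targets for the accel injection is not shown in the trace, so the nvmf target's default RPC socket is assumed:

    BPERF_RPC="./scripts/rpc.py -s /var/tmp/bperf.sock"  # bperf_rpc in the trace
    TGT_RPC="./scripts/rpc.py"                           # rpc_cmd; default socket assumed

    # keep per-error-code NVMe statistics and retry failed I/O indefinitely
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # start with crc32c error injection disabled
    $TGT_RPC accel_error_inject_error -o crc32c -t disable
    # attach the target with TCP data digest enabled
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt every 32nd crc32c operation so data digest checks fail
    $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    # drive the timed randread workload
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # afterwards, count transient transport errors accumulated on the bdev
    $BPERF_RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
        | .driver_specific | .nvme_error | .status_code
        | .command_transient_transport_error'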
00:34:40.375 [2024-10-06 11:29:37.743362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.375 [2024-10-06 11:29:37.743393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.375 [2024-10-06 11:29:37.743403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.375 [2024-10-06 11:29:37.753256] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.375 [2024-10-06 11:29:37.753280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.375 [2024-10-06 11:29:37.753290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.375 [2024-10-06 11:29:37.762593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.375 [2024-10-06 11:29:37.762615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.375 [2024-10-06 11:29:37.762624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.375 [2024-10-06 11:29:37.772245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.375 [2024-10-06 11:29:37.772267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.375 [2024-10-06 11:29:37.772276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.375 [2024-10-06 11:29:37.781899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.375 [2024-10-06 11:29:37.781921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.375 [2024-10-06 11:29:37.781930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.375 [2024-10-06 11:29:37.791343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.375 [2024-10-06 11:29:37.791365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.375 [2024-10-06 11:29:37.791373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.375 [2024-10-06 11:29:37.800033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.375 [2024-10-06 11:29:37.800055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.375 [2024-10-06 11:29:37.800070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.375 [2024-10-06 11:29:37.809241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.375 [2024-10-06 11:29:37.809262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.375 [2024-10-06 11:29:37.809271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.375 [2024-10-06 11:29:37.818862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.375 [2024-10-06 11:29:37.818883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.375 [2024-10-06 11:29:37.818891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.375 [2024-10-06 11:29:37.828300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.375 [2024-10-06 11:29:37.828322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.375 [2024-10-06 11:29:37.828330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.375 [2024-10-06 11:29:37.838403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.375 [2024-10-06 11:29:37.838424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.375 [2024-10-06 11:29:37.838432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.375 [2024-10-06 11:29:37.849144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.375 [2024-10-06 11:29:37.849166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.375 [2024-10-06 11:29:37.849174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.375 [2024-10-06 11:29:37.859064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.375 [2024-10-06 11:29:37.859084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.375 [2024-10-06 11:29:37.859093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.375 [2024-10-06 11:29:37.869476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.375 [2024-10-06 11:29:37.869495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.375 [2024-10-06 11:29:37.869503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.375 [2024-10-06 11:29:37.878423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.375 [2024-10-06 11:29:37.878443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.375 [2024-10-06 11:29:37.878451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.375 [2024-10-06 11:29:37.886713] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.375 [2024-10-06 11:29:37.886733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.375 [2024-10-06 11:29:37.886744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.375 [2024-10-06 11:29:37.894638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.375 [2024-10-06 11:29:37.894659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.375 [2024-10-06 11:29:37.894667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.375 [2024-10-06 11:29:37.902315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.376 [2024-10-06 11:29:37.902335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.376 [2024-10-06 11:29:37.902343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.376 [2024-10-06 11:29:37.909434] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.376 [2024-10-06 11:29:37.909454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.376 [2024-10-06 11:29:37.909462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.376 [2024-10-06 11:29:37.916184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.376 [2024-10-06 11:29:37.916203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.376 [2024-10-06 11:29:37.916211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.376 [2024-10-06 11:29:37.922664] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.376 [2024-10-06 11:29:37.922684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:40.376 [2024-10-06 11:29:37.922692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.376 [2024-10-06 11:29:37.929882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.376 [2024-10-06 11:29:37.929902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.376 [2024-10-06 11:29:37.929910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.376 [2024-10-06 11:29:37.938077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.376 [2024-10-06 11:29:37.938097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.376 [2024-10-06 11:29:37.938105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.376 [2024-10-06 11:29:37.946397] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.376 [2024-10-06 11:29:37.946417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.376 [2024-10-06 11:29:37.946425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.636 [2024-10-06 11:29:37.954786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.636 [2024-10-06 11:29:37.954811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.636 [2024-10-06 11:29:37.954819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.636 [2024-10-06 11:29:37.963582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.636 [2024-10-06 11:29:37.963602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.636 [2024-10-06 11:29:37.963610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.636 [2024-10-06 11:29:37.972099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.636 [2024-10-06 11:29:37.972135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.636 [2024-10-06 11:29:37.972143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.636 [2024-10-06 11:29:37.981499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.636 [2024-10-06 11:29:37.981520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.636 [2024-10-06 11:29:37.981528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.636 [2024-10-06 11:29:37.990593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.636 [2024-10-06 11:29:37.990613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.636 [2024-10-06 11:29:37.990621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.636 [2024-10-06 11:29:37.998932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.636 [2024-10-06 11:29:37.998952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.636 [2024-10-06 11:29:37.998960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.636 [2024-10-06 11:29:38.007739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.636 [2024-10-06 11:29:38.007760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.636 [2024-10-06 11:29:38.007768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.636 [2024-10-06 11:29:38.015544] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.636 [2024-10-06 11:29:38.015565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.636 [2024-10-06 11:29:38.015572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.636 [2024-10-06 11:29:38.024099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.636 [2024-10-06 11:29:38.024119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.636 [2024-10-06 11:29:38.024131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.636 [2024-10-06 11:29:38.032032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.636 [2024-10-06 11:29:38.032053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.636 [2024-10-06 11:29:38.032066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.636 [2024-10-06 11:29:38.039833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.039854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.039862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.046871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.046892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.046899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.054579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.054600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.054608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.062419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.062440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.062448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.069593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.069613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.069621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.076394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.076413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.076421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.082789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.082809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.082817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.089075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 
00:34:40.637 [2024-10-06 11:29:38.089098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.089105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.095172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.095192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.095199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.101956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.101976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.101984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.107833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.107852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.107860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.113504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.113523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.113531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.119238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.119258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.119265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.125010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.125030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.125037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.130964] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.130984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.130991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.136636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.136655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.136663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.142432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.142452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.142459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.147969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.147991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.147999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.153863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.153883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.153891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.159486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.159507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.159514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.164679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.164700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.164708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.170094] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.170115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.170122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.175397] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.175417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.175424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.180725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.180746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.180754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.185960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.185980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.185991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.191214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.637 [2024-10-06 11:29:38.191234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.637 [2024-10-06 11:29:38.191242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.637 [2024-10-06 11:29:38.196478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.638 [2024-10-06 11:29:38.196498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.638 [2024-10-06 11:29:38.196506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.638 [2024-10-06 11:29:38.201804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.638 [2024-10-06 11:29:38.201824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.638 [2024-10-06 11:29:38.201832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.638 [2024-10-06 11:29:38.207211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.638 [2024-10-06 11:29:38.207232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.638 [2024-10-06 11:29:38.207240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.905 [2024-10-06 11:29:38.212757] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.905 [2024-10-06 11:29:38.212778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.905 [2024-10-06 11:29:38.212786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.218314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.218334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.218342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.223375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.223396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.223403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.228676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.228696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.228703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.233882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.233906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.233914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.239152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.239172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.239180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.244343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.244363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.244371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.249589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.249610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.249617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.254943] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.254964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.254972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.260275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.260296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.260304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.265805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.265825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.265833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.271272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.271292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.271300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.276727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.276748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 
[2024-10-06 11:29:38.276756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.282099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.282119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.282127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.287427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.287448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.287455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.292954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.292975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.292982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.298382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.298402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.298410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.303972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.303992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.304000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.309554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.309574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.309581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.314973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.314993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8832 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.315001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.906 [2024-10-06 11:29:38.320430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.906 [2024-10-06 11:29:38.320450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.906 [2024-10-06 11:29:38.320458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.907 [2024-10-06 11:29:38.325799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.907 [2024-10-06 11:29:38.325819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.907 [2024-10-06 11:29:38.325830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.907 [2024-10-06 11:29:38.331216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.907 [2024-10-06 11:29:38.331236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.907 [2024-10-06 11:29:38.331244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.907 [2024-10-06 11:29:38.336766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.907 [2024-10-06 11:29:38.336786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.907 [2024-10-06 11:29:38.336794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.907 [2024-10-06 11:29:38.342297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.907 [2024-10-06 11:29:38.342318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.907 [2024-10-06 11:29:38.342326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.907 [2024-10-06 11:29:38.347904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.907 [2024-10-06 11:29:38.347924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.907 [2024-10-06 11:29:38.347932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.907 [2024-10-06 11:29:38.353428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.907 [2024-10-06 11:29:38.353448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.907 [2024-10-06 11:29:38.353455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.907 [2024-10-06 11:29:38.358832] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.907 [2024-10-06 11:29:38.358852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.907 [2024-10-06 11:29:38.358859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.907 [2024-10-06 11:29:38.364365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.907 [2024-10-06 11:29:38.364385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.907 [2024-10-06 11:29:38.364393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.907 [2024-10-06 11:29:38.369913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.907 [2024-10-06 11:29:38.369932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.907 [2024-10-06 11:29:38.369939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.907 [2024-10-06 11:29:38.375385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.907 [2024-10-06 11:29:38.375406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.907 [2024-10-06 11:29:38.375414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.907 [2024-10-06 11:29:38.380848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.907 [2024-10-06 11:29:38.380868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.907 [2024-10-06 11:29:38.380876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.907 [2024-10-06 11:29:38.386262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.907 [2024-10-06 11:29:38.386283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.907 [2024-10-06 11:29:38.386290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.907 [2024-10-06 11:29:38.391725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.907 [2024-10-06 11:29:38.391746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.907 [2024-10-06 11:29:38.391753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.907 [2024-10-06 11:29:38.397302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.907 [2024-10-06 11:29:38.397323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.907 [2024-10-06 11:29:38.397331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.907 [2024-10-06 11:29:38.402746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.907 [2024-10-06 11:29:38.402767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.907 [2024-10-06 11:29:38.402774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.907 [2024-10-06 11:29:38.408155] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.907 [2024-10-06 11:29:38.408176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.907 [2024-10-06 11:29:38.408183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.908 [2024-10-06 11:29:38.413479] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.908 [2024-10-06 11:29:38.413499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.908 [2024-10-06 11:29:38.413507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.908 [2024-10-06 11:29:38.418833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.908 [2024-10-06 11:29:38.418853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.908 [2024-10-06 11:29:38.418864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.908 [2024-10-06 11:29:38.424471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.908 [2024-10-06 11:29:38.424492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.908 [2024-10-06 11:29:38.424500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.908 [2024-10-06 11:29:38.429906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 
00:34:40.908 [2024-10-06 11:29:38.429927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.908 [2024-10-06 11:29:38.429935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.908 [2024-10-06 11:29:38.435453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.908 [2024-10-06 11:29:38.435473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.908 [2024-10-06 11:29:38.435481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.908 [2024-10-06 11:29:38.440998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.908 [2024-10-06 11:29:38.441018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.908 [2024-10-06 11:29:38.441026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.908 [2024-10-06 11:29:38.446320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.908 [2024-10-06 11:29:38.446339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.908 [2024-10-06 11:29:38.446347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.908 [2024-10-06 11:29:38.451636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.908 [2024-10-06 11:29:38.451656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.908 [2024-10-06 11:29:38.451664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.908 [2024-10-06 11:29:38.456926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.908 [2024-10-06 11:29:38.456946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.908 [2024-10-06 11:29:38.456954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.908 [2024-10-06 11:29:38.462268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.908 [2024-10-06 11:29:38.462288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.908 [2024-10-06 11:29:38.462296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.908 [2024-10-06 11:29:38.467783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.908 [2024-10-06 11:29:38.467808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.908 [2024-10-06 11:29:38.467816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.908 [2024-10-06 11:29:38.473266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:40.908 [2024-10-06 11:29:38.473287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.908 [2024-10-06 11:29:38.473295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.170 [2024-10-06 11:29:38.478915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.170 [2024-10-06 11:29:38.478934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.170 [2024-10-06 11:29:38.478942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.170 [2024-10-06 11:29:38.484494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.170 [2024-10-06 11:29:38.484514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.170 [2024-10-06 11:29:38.484521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.170 [2024-10-06 11:29:38.490173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.170 [2024-10-06 11:29:38.490194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.170 [2024-10-06 11:29:38.490202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.170 [2024-10-06 11:29:38.496124] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.170 [2024-10-06 11:29:38.496145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.170 [2024-10-06 11:29:38.496153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.170 [2024-10-06 11:29:38.501991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.170 [2024-10-06 11:29:38.502012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.170 [2024-10-06 11:29:38.502019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.170 [2024-10-06 11:29:38.507639] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.170 [2024-10-06 11:29:38.507660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.507668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.513166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.513187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.513196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.518906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.518928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.518936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.524571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.524591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.524599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.530235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.530256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.530274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.535827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.535848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.535855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.541411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.541432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.541440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:41.171 [2024-10-06 11:29:38.547046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.547072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.547080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.550880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.550901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.550908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.555361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.555382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.555389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.560693] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.560713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.560724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.566146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.566167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.566175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.571528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.571548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.571556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.576958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.576978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.576985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.582246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.582267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.582275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.587651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.587672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.587679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.593228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.593248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.593256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.598800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.598821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.598829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.604458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.604480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.604488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.610022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.610047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.610055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.615551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.615572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.615580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.621024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.621045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.621053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.626445] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.626467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.626476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.631876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.631898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.631906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.637320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.637342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.637349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.642867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.642888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.171 [2024-10-06 11:29:38.642896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.171 [2024-10-06 11:29:38.648656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.171 [2024-10-06 11:29:38.648677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.172 [2024-10-06 11:29:38.648684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.172 [2024-10-06 11:29:38.654394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.172 [2024-10-06 11:29:38.654414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.172 [2024-10-06 11:29:38.654422] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.172 [2024-10-06 11:29:38.659901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.172 [2024-10-06 11:29:38.659922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.172 [2024-10-06 11:29:38.659930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.172 [2024-10-06 11:29:38.665459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.172 [2024-10-06 11:29:38.665480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.172 [2024-10-06 11:29:38.665487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.172 [2024-10-06 11:29:38.671040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.172 [2024-10-06 11:29:38.671068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.172 [2024-10-06 11:29:38.671076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.172 [2024-10-06 11:29:38.676692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.172 [2024-10-06 11:29:38.676712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.172 [2024-10-06 11:29:38.676719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.172 [2024-10-06 11:29:38.682392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.172 [2024-10-06 11:29:38.682412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.172 [2024-10-06 11:29:38.682419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.172 [2024-10-06 11:29:38.688046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.172 [2024-10-06 11:29:38.688073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.172 [2024-10-06 11:29:38.688081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.172 [2024-10-06 11:29:38.693671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.172 [2024-10-06 11:29:38.693692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.172 
[2024-10-06 11:29:38.693700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.172 [2024-10-06 11:29:38.699366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.172 [2024-10-06 11:29:38.699387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.172 [2024-10-06 11:29:38.699395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.172 [2024-10-06 11:29:38.705074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.172 [2024-10-06 11:29:38.705094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.172 [2024-10-06 11:29:38.705105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.172 [2024-10-06 11:29:38.710658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.172 [2024-10-06 11:29:38.710678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.172 [2024-10-06 11:29:38.710686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.172 [2024-10-06 11:29:38.716205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.172 [2024-10-06 11:29:38.716226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.172 [2024-10-06 11:29:38.716234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.172 [2024-10-06 11:29:38.721724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.172 [2024-10-06 11:29:38.721746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.172 [2024-10-06 11:29:38.721754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.172 [2024-10-06 11:29:38.727334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.172 [2024-10-06 11:29:38.727355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.172 [2024-10-06 11:29:38.727363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.172 4901.00 IOPS, 612.62 MiB/s [2024-10-06 11:29:38.734218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.172 [2024-10-06 11:29:38.734240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.172 [2024-10-06 11:29:38.734248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.172 [2024-10-06 11:29:38.740203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.172 [2024-10-06 11:29:38.740225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.172 [2024-10-06 11:29:38.740232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.432 [2024-10-06 11:29:38.746238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.432 [2024-10-06 11:29:38.746259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.432 [2024-10-06 11:29:38.746267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.432 [2024-10-06 11:29:38.752173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.432 [2024-10-06 11:29:38.752194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.432 [2024-10-06 11:29:38.752201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.432 [2024-10-06 11:29:38.757913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.432 [2024-10-06 11:29:38.757934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.432 [2024-10-06 11:29:38.757942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.432 [2024-10-06 11:29:38.763572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.432 [2024-10-06 11:29:38.763594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.432 [2024-10-06 11:29:38.763601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.432 [2024-10-06 11:29:38.769225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.432 [2024-10-06 11:29:38.769245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.432 [2024-10-06 11:29:38.769253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.432 [2024-10-06 11:29:38.774887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.432 [2024-10-06 11:29:38.774907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.432 [2024-10-06 11:29:38.774915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.432 [2024-10-06 11:29:38.780659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.432 [2024-10-06 11:29:38.780681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.432 [2024-10-06 11:29:38.780689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.432 [2024-10-06 11:29:38.786336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.432 [2024-10-06 11:29:38.786356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.432 [2024-10-06 11:29:38.786364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.432 [2024-10-06 11:29:38.791959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.432 [2024-10-06 11:29:38.791980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.432 [2024-10-06 11:29:38.791989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.432 [2024-10-06 11:29:38.797647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.432 [2024-10-06 11:29:38.797668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.432 [2024-10-06 11:29:38.797675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.432 [2024-10-06 11:29:38.803268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.432 [2024-10-06 11:29:38.803290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.803301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.808828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.808849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.808857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.814288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 
[2024-10-06 11:29:38.814309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.814318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.819862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.819883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.819891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.825406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.825426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.825434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.831083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.831105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.831113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.836764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.836785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.836793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.842239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.842259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.842268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.847734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.847755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.847765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.853230] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.853257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.853265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.858811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.858831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.858839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.864360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.864381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.864389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.869842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.869863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.869871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.875323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.875344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.875354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.880803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.880824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.880832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.886299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.886320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.886328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.891710] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.891731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.891738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.897309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.897330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.897338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.902666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.902687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.902695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.908046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.908074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.908082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.913569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.913590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.913598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.919145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.919166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.919174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.924803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.924824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.924831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
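The repeated data digest errors above come from the receive-side digest check on NVMe/TCP data PDUs: the transport recomputes a CRC32C over the received payload and compares it against the DDGST value carried in the PDU, and on a mismatch the command completes with the transient transport error printed after each entry. The fragment below is a minimal, self-contained software sketch of that comparison, added here for illustration only; it uses a plain bitwise CRC32C and hypothetical helper names (crc32c, ddgst_ok), not the SPDK accel-sequence path (nvme_tcp_accel_seq_recv_compute_crc32_done) that emits these log lines.

    /*
     * Illustrative sketch: verify an NVMe/TCP data digest (CRC32C over the
     * PDU payload) in plain software. Hypothetical helpers, not SPDK code.
     */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Reflected CRC32C (Castagnoli), init and final XOR of 0xFFFFFFFF. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* True when the digest carried in the PDU matches the received payload. */
    static bool ddgst_ok(const uint8_t *payload, size_t len, uint32_t recv_ddgst)
    {
        return crc32c(payload, len) == recv_ddgst;
    }

    int main(void)
    {
        uint8_t payload[512] = { 0 };   /* stand-in payload buffer */
        uint32_t good = crc32c(payload, sizeof(payload));

        printf("match: %d\n", ddgst_ok(payload, sizeof(payload), good));
        /* A corrupted digest reproduces the "data digest error" condition. */
        printf("match: %d\n", ddgst_ok(payload, sizeof(payload), good ^ 1u));
        return 0;
    }

With the intact digest the first check passes; the second, deliberately corrupted digest fails the comparison, which is the condition this test drives repeatedly to produce the error lines in this log.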
00:34:41.433 [2024-10-06 11:29:38.930237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.930258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.433 [2024-10-06 11:29:38.930266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.433 [2024-10-06 11:29:38.935603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.433 [2024-10-06 11:29:38.935624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.434 [2024-10-06 11:29:38.935632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.434 [2024-10-06 11:29:38.941001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.434 [2024-10-06 11:29:38.941022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.434 [2024-10-06 11:29:38.941031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.434 [2024-10-06 11:29:38.946477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.434 [2024-10-06 11:29:38.946497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.434 [2024-10-06 11:29:38.946508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.434 [2024-10-06 11:29:38.951963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.434 [2024-10-06 11:29:38.951985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.434 [2024-10-06 11:29:38.951992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.434 [2024-10-06 11:29:38.957445] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.434 [2024-10-06 11:29:38.957467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.434 [2024-10-06 11:29:38.957474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.434 [2024-10-06 11:29:38.962814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.434 [2024-10-06 11:29:38.962834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.434 [2024-10-06 11:29:38.962842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.434 [2024-10-06 11:29:38.968232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.434 [2024-10-06 11:29:38.968252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.434 [2024-10-06 11:29:38.968260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.434 [2024-10-06 11:29:38.973644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.434 [2024-10-06 11:29:38.973664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.434 [2024-10-06 11:29:38.973671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.434 [2024-10-06 11:29:38.979008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.434 [2024-10-06 11:29:38.979029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.434 [2024-10-06 11:29:38.979037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.434 [2024-10-06 11:29:38.984453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.434 [2024-10-06 11:29:38.984473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.434 [2024-10-06 11:29:38.984480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.434 [2024-10-06 11:29:38.989941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.434 [2024-10-06 11:29:38.989962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.434 [2024-10-06 11:29:38.989970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.434 [2024-10-06 11:29:38.995378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.434 [2024-10-06 11:29:38.995403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.434 [2024-10-06 11:29:38.995411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.434 [2024-10-06 11:29:39.000832] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.434 [2024-10-06 11:29:39.000868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.434 [2024-10-06 11:29:39.000876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.006321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.006342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.006350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.011784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.011805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.011813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.017289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.017310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.017318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.022877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.022897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.022904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.028448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.028468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.028476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.033896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.033916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.033924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.039291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.039311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.039319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.044702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.044723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.044731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.050238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.050259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.050266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.055762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.055782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.055790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.061265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.061285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.061293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.066640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.066660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.066667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.072076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.072096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.072104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.077571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.077592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 
[2024-10-06 11:29:39.077600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.083206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.083225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.083233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.088709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.088732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.088740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.094117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.094137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.094144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.099581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.099602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.099609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.105067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.105087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.105095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.110508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.110528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.110536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.116137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.695 [2024-10-06 11:29:39.116157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:41.695 [2024-10-06 11:29:39.116164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.695 [2024-10-06 11:29:39.121666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.121687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.121695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.127065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.127085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.127093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.132498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.132519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.132527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.137926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.137947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.137954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.143433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.143453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.143460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.148954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.148975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.148982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.154503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.154524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.154531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.160065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.160085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.160092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.165498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.165519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.165526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.170864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.170884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.170892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.176473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.176493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.176501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.182071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.182091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.182102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.187760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.187780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.187787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.193326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.193346] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.193354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.198831] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.198850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.198858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.204386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.204406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.204414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.209943] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.209964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.209971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.215434] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.215455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.215463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.221288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.221308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.221316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.227364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.227384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.227392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.234660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 
[2024-10-06 11:29:39.234684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.234691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.243743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.243765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.243773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.252679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.252699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.252707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.260765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.260786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.260794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.696 [2024-10-06 11:29:39.268056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.696 [2024-10-06 11:29:39.268085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.696 [2024-10-06 11:29:39.268093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.957 [2024-10-06 11:29:39.275203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.957 [2024-10-06 11:29:39.275224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.957 [2024-10-06 11:29:39.275232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.957 [2024-10-06 11:29:39.281957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.957 [2024-10-06 11:29:39.281977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.957 [2024-10-06 11:29:39.281985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.957 [2024-10-06 11:29:39.288070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1c98d90) 00:34:41.957 [2024-10-06 11:29:39.288090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.957 [2024-10-06 11:29:39.288097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.957 [2024-10-06 11:29:39.294029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.957 [2024-10-06 11:29:39.294048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.957 [2024-10-06 11:29:39.294056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.957 [2024-10-06 11:29:39.300045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.957 [2024-10-06 11:29:39.300074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.957 [2024-10-06 11:29:39.300082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.957 [2024-10-06 11:29:39.306081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.957 [2024-10-06 11:29:39.306101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.957 [2024-10-06 11:29:39.306109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.957 [2024-10-06 11:29:39.311894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.957 [2024-10-06 11:29:39.311915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.957 [2024-10-06 11:29:39.311923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.957 [2024-10-06 11:29:39.317333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.317353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.317360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.322731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.322751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.322759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.328714] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.328734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.328741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.334650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.334670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.334677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.340000] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.340020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.340028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.345764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.345784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.345795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.351339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.351360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.351368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.360400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.360420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.360428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.368981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.369000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.369007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:34:41.958 [2024-10-06 11:29:39.377057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.377081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.377089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.385546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.385566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.385574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.394174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.394193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.394201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.401933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.401953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.401960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.410771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.410791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.410798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.418926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.418950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.418957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.426678] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.426699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.426707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.433646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.433666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.433674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.440599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.440619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.440627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.446875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.446896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.446903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.453258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.453278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.453286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.459023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.459044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.459051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.466404] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.466424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.466432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.475896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.475916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.475924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.484643] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.484664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.484672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.492694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.492714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.492721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.958 [2024-10-06 11:29:39.499869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.958 [2024-10-06 11:29:39.499889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.958 [2024-10-06 11:29:39.499896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.959 [2024-10-06 11:29:39.507873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.959 [2024-10-06 11:29:39.507894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.959 [2024-10-06 11:29:39.507902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.959 [2024-10-06 11:29:39.515021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.959 [2024-10-06 11:29:39.515042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.959 [2024-10-06 11:29:39.515050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.959 [2024-10-06 11:29:39.521444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.959 [2024-10-06 11:29:39.521465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.959 [2024-10-06 11:29:39.521473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.959 [2024-10-06 11:29:39.527502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:41.959 [2024-10-06 11:29:39.527523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.959 [2024-10-06 11:29:39.527531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.219 [2024-10-06 11:29:39.532992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.219 [2024-10-06 11:29:39.533013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.219 [2024-10-06 11:29:39.533021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.219 [2024-10-06 11:29:39.541762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.219 [2024-10-06 11:29:39.541782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.219 [2024-10-06 11:29:39.541793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.219 [2024-10-06 11:29:39.550497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.219 [2024-10-06 11:29:39.550518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.219 [2024-10-06 11:29:39.550526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.219 [2024-10-06 11:29:39.558471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.219 [2024-10-06 11:29:39.558492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.219 [2024-10-06 11:29:39.558500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.219 [2024-10-06 11:29:39.566023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.219 [2024-10-06 11:29:39.566044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.219 [2024-10-06 11:29:39.566052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.219 [2024-10-06 11:29:39.572701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.219 [2024-10-06 11:29:39.572721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.219 [2024-10-06 11:29:39.572729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.219 [2024-10-06 11:29:39.579188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.219 [2024-10-06 11:29:39.579208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.219 
[2024-10-06 11:29:39.579216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.219 [2024-10-06 11:29:39.585669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.219 [2024-10-06 11:29:39.585690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.219 [2024-10-06 11:29:39.585698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.219 [2024-10-06 11:29:39.592678] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.219 [2024-10-06 11:29:39.592700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.592708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.600219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.600241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.600249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.606803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.606824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.606832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.612731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.612752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.612760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.618501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.618522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.618530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.624298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.624318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8800 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.624326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.633993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.634013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.634021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.643018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.643039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.643047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.651851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.651872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.651880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.660292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.660313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.660321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.668661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.668681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.668694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.677459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.677479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.677486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.685641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.685662] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.685670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.693489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.693510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.693518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.700490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.700511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.700519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.707963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.707983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.707991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.714209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.714230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.714238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.720657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.720678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.720685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.726770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.726790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.726798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.220 [2024-10-06 11:29:39.732489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c98d90) 00:34:42.220 [2024-10-06 11:29:39.732513] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.220 [2024-10-06 11:29:39.732521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.220 4934.00 IOPS, 616.75 MiB/s 00:34:42.220 Latency(us) 00:34:42.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:42.220 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:42.220 nvme0n1 : 2.00 4933.88 616.73 0.00 0.00 3240.19 663.16 11297.16 00:34:42.220 =================================================================================================================== 00:34:42.220 Total : 4933.88 616.73 0.00 0.00 3240.19 663.16 11297.16 00:34:42.220 { 00:34:42.220 "results": [ 00:34:42.220 { 00:34:42.220 "job": "nvme0n1", 00:34:42.220 "core_mask": "0x2", 00:34:42.220 "workload": "randread", 00:34:42.220 "status": "finished", 00:34:42.220 "queue_depth": 16, 00:34:42.220 "io_size": 131072, 00:34:42.220 "runtime": 2.003293, 00:34:42.220 "iops": 4933.876372552592, 00:34:42.220 "mibps": 616.734546569074, 00:34:42.220 "io_failed": 0, 00:34:42.220 "io_timeout": 0, 00:34:42.220 "avg_latency_us": 3240.18910408356, 00:34:42.220 "min_latency_us": 663.1619047619048, 00:34:42.220 "max_latency_us": 11297.158095238095 00:34:42.220 } 00:34:42.220 ], 00:34:42.220 "core_count": 1 00:34:42.220 } 00:34:42.220 11:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:42.220 11:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:42.220 11:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:42.220 | .driver_specific 00:34:42.220 | .nvme_error 00:34:42.220 | .status_code 00:34:42.220 | .command_transient_transport_error' 00:34:42.220 11:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:42.480 11:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 318 > 0 )) 00:34:42.480 11:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2260324 00:34:42.480 11:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2260324 ']' 00:34:42.480 11:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2260324 00:34:42.480 11:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:34:42.480 11:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:42.480 11:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2260324 00:34:42.480 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:42.480 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:42.480 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2260324' 00:34:42.480 killing process with pid 2260324 
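The pass/fail decision traced just above ("(( 318 > 0 ))") comes from reading the per-bdev NVMe error counters that bdevperf keeps when --nvme-error-stat is enabled. A minimal stand-alone sketch of that query, assuming the same /var/tmp/bperf.sock socket and nvme0n1 bdev used in this run (paths shortened relative to the SPDK tree):

  # Read the transient-transport-error counter that the digest test checks above.
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # Each injected data digest error shows up as a COMMAND TRANSIENT TRANSPORT ERROR completion
  # in the log, so the run only passes when this counter is non-zero.
  (( errcount > 0 )) && echo "observed $errcount transient transport errors"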
00:34:42.480 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2260324 00:34:42.480 Received shutdown signal, test time was about 2.000000 seconds 00:34:42.480 00:34:42.480 Latency(us) 00:34:42.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:42.480 =================================================================================================================== 00:34:42.480 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:42.480 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2260324 00:34:42.740 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:34:42.740 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:42.740 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:42.740 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:42.740 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:42.740 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2260862 00:34:42.740 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2260862 /var/tmp/bperf.sock 00:34:42.740 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:34:42.740 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2260862 ']' 00:34:42.740 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:42.740 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:42.740 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:42.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:42.740 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:42.740 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:42.740 [2024-10-06 11:29:40.243749] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
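A hedged sketch of the launch step traced above: run_bperf_err starts bdevperf in its wait-for-RPC mode (-z) on a private socket and then blocks until that socket is listening. The binary path and the waitforlisten helper belong to the autotest harness; this only illustrates the flow, not a drop-in script.

  # Start bdevperf with a 4 KiB random-write workload, queue depth 128, 2 s runtime,
  # deferring I/O until perform_tests is issued over RPC (-z), core mask 0x2 (-m 2).
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # Harness helper: poll until the process is up and /var/tmp/bperf.sock answers RPCs.
  waitforlisten "$bperfpid" /var/tmp/bperf.sock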
00:34:42.740 [2024-10-06 11:29:40.243802] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2260862 ] 00:34:42.999 [2024-10-06 11:29:40.317129] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.999 [2024-10-06 11:29:40.354993] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:42.999 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:42.999 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:34:42.999 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:42.999 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:43.258 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:43.258 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.258 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:43.258 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.258 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:43.258 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:43.518 nvme0n1 00:34:43.518 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:43.518 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.518 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:43.518 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.518 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:43.518 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:43.518 Running I/O for 2 seconds... 
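Before "Running I/O for 2 seconds..." the trace above wires up the error-injection path. The calls below are the same RPCs in the same order, shown as a condensed sketch; which application each call lands on follows the harness helpers (bperf_rpc goes to the bdevperf socket, rpc_cmd presumably to the target's default socket), so treat the socket arguments as assumptions read off this log rather than a fixed recipe.

  # Keep per-bdev NVMe error counters; -1 retry count means the bdev layer keeps retrying.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Start with crc32c corruption disabled, then attach with data digest enabled (--ddgst).
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-enable injection: corrupt crc32c results at the interval given by -i; each corrupted
  # digest surfaces as one "data digest error" / TRANSIENT TRANSPORT ERROR pair in the log.
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # Kick off the queued bdevperf job over RPC.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests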
00:34:43.518 [2024-10-06 11:29:40.997982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fac10 00:34:43.518 [2024-10-06 11:29:40.998881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.518 [2024-10-06 11:29:40.998910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:43.518 [2024-10-06 11:29:41.008422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ff3c8 00:34:43.518 [2024-10-06 11:29:41.009354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.518 [2024-10-06 11:29:41.009377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:43.518 [2024-10-06 11:29:41.018029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e01f8 00:34:43.518 [2024-10-06 11:29:41.019038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.518 [2024-10-06 11:29:41.019063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:43.518 [2024-10-06 11:29:41.027718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198feb58 00:34:43.518 [2024-10-06 11:29:41.028988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.518 [2024-10-06 11:29:41.029008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:43.518 [2024-10-06 11:29:41.036537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f0350 00:34:43.518 [2024-10-06 11:29:41.037822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.518 [2024-10-06 11:29:41.037841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:43.518 [2024-10-06 11:29:41.045912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fb480 00:34:43.518 [2024-10-06 11:29:41.047121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.518 [2024-10-06 11:29:41.047139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:43.518 [2024-10-06 11:29:41.055354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f7970 00:34:43.518 [2024-10-06 11:29:41.056561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.518 [2024-10-06 11:29:41.056580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:007b p:0 m:0 dnr:0 00:34:43.518 [2024-10-06 11:29:41.063926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f7538 00:34:43.518 [2024-10-06 11:29:41.065031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.518 [2024-10-06 11:29:41.065049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:43.518 [2024-10-06 11:29:41.073388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fcdd0 00:34:43.518 [2024-10-06 11:29:41.074452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.518 [2024-10-06 11:29:41.074471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:43.518 [2024-10-06 11:29:41.082891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f5378 00:34:43.518 [2024-10-06 11:29:41.084011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.518 [2024-10-06 11:29:41.084029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:43.518 [2024-10-06 11:29:41.090718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e4578 00:34:43.518 [2024-10-06 11:29:41.091254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.518 [2024-10-06 11:29:41.091272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:43.778 [2024-10-06 11:29:41.100501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fac10 00:34:43.778 [2024-10-06 11:29:41.101170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.778 [2024-10-06 11:29:41.101189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:43.778 [2024-10-06 11:29:41.110136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e8d30 00:34:43.778 [2024-10-06 11:29:41.110886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.778 [2024-10-06 11:29:41.110904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:43.778 [2024-10-06 11:29:41.119466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e1f80 00:34:43.778 [2024-10-06 11:29:41.120508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.778 [2024-10-06 11:29:41.120526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:43.778 [2024-10-06 11:29:41.128908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fcdd0 00:34:43.778 [2024-10-06 11:29:41.130023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.778 [2024-10-06 11:29:41.130042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:43.778 [2024-10-06 11:29:41.136629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ea680 00:34:43.778 [2024-10-06 11:29:41.137140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.778 [2024-10-06 11:29:41.137159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:43.778 [2024-10-06 11:29:41.146173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198dfdc0 00:34:43.778 [2024-10-06 11:29:41.146797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.778 [2024-10-06 11:29:41.146816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:43.778 [2024-10-06 11:29:41.155516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e0ea0 00:34:43.778 [2024-10-06 11:29:41.156435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.778 [2024-10-06 11:29:41.156453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:43.778 [2024-10-06 11:29:41.164917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fef90 00:34:43.778 [2024-10-06 11:29:41.165907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.778 [2024-10-06 11:29:41.165926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:43.778 [2024-10-06 11:29:41.174506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e0630 00:34:43.779 [2024-10-06 11:29:41.175734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.175753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.182003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e23b8 00:34:43.779 [2024-10-06 11:29:41.182627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.182645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.191431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198eaab8 00:34:43.779 [2024-10-06 11:29:41.192313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.192342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.201615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e6fa8 00:34:43.779 [2024-10-06 11:29:41.202799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.202817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.211049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f3e60 00:34:43.779 [2024-10-06 11:29:41.212193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.212211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.218777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198de470 00:34:43.779 [2024-10-06 11:29:41.219288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.219310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.228397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fa7d8 00:34:43.779 [2024-10-06 11:29:41.228998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.229017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.237921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e38d0 00:34:43.779 [2024-10-06 11:29:41.238649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.238668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.246306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e2c28 00:34:43.779 [2024-10-06 11:29:41.247154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.247173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.255461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:43.779 [2024-10-06 11:29:41.256303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.256322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.264704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e6300 00:34:43.779 [2024-10-06 11:29:41.265218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.265236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.274581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ebb98 00:34:43.779 [2024-10-06 11:29:41.275221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.275239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.282955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fa3a0 00:34:43.779 [2024-10-06 11:29:41.283667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.283684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.292525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198eb328 00:34:43.779 [2024-10-06 11:29:41.293500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.293518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.302173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e23b8 00:34:43.779 [2024-10-06 11:29:41.303214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.303233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.311732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f46d0 00:34:43.779 [2024-10-06 11:29:41.312928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 
11:29:41.312947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.319834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fa3a0 00:34:43.779 [2024-10-06 11:29:41.320332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.320350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.329106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fef90 00:34:43.779 [2024-10-06 11:29:41.329936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.329953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.338146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f81e0 00:34:43.779 [2024-10-06 11:29:41.338601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.338619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:43.779 [2024-10-06 11:29:41.347932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198eea00 00:34:43.779 [2024-10-06 11:29:41.349473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:43.779 [2024-10-06 11:29:41.349491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:44.039 [2024-10-06 11:29:41.357621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198df550 00:34:44.039 [2024-10-06 11:29:41.358690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.039 [2024-10-06 11:29:41.358708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:44.039 [2024-10-06 11:29:41.366725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198efae0 00:34:44.039 [2024-10-06 11:29:41.367754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.039 [2024-10-06 11:29:41.367773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:44.039 [2024-10-06 11:29:41.375405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f6458 00:34:44.039 [2024-10-06 11:29:41.376443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:44.039 [2024-10-06 11:29:41.376462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:44.039 [2024-10-06 11:29:41.386310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f6cc8 00:34:44.039 [2024-10-06 11:29:41.387797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.039 [2024-10-06 11:29:41.387814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:44.039 [2024-10-06 11:29:41.394402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e7c50 00:34:44.039 [2024-10-06 11:29:41.395204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.039 [2024-10-06 11:29:41.395223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:44.039 [2024-10-06 11:29:41.403669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fb048 00:34:44.039 [2024-10-06 11:29:41.404833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.039 [2024-10-06 11:29:41.404852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:44.039 [2024-10-06 11:29:41.412964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f31b8 00:34:44.039 [2024-10-06 11:29:41.414043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.039 [2024-10-06 11:29:41.414064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:44.039 [2024-10-06 11:29:41.422121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f4b08 00:34:44.039 [2024-10-06 11:29:41.423166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.039 [2024-10-06 11:29:41.423184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:44.039 [2024-10-06 11:29:41.431452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ee5c8 00:34:44.039 [2024-10-06 11:29:41.432523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.039 [2024-10-06 11:29:41.432543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:44.039 [2024-10-06 11:29:41.440605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e1710 00:34:44.039 [2024-10-06 11:29:41.441696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11190 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:44.039 [2024-10-06 11:29:41.441714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:44.039 [2024-10-06 11:29:41.449786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e0630 00:34:44.039 [2024-10-06 11:29:41.450836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.039 [2024-10-06 11:29:41.450854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:44.039 [2024-10-06 11:29:41.458935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ea680 00:34:44.039 [2024-10-06 11:29:41.459979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.040 [2024-10-06 11:29:41.460001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:44.040 [2024-10-06 11:29:41.468414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f81e0 00:34:44.040 [2024-10-06 11:29:41.469571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.040 [2024-10-06 11:29:41.469590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:44.040 [2024-10-06 11:29:41.476148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ed0b0 00:34:44.040 [2024-10-06 11:29:41.476704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.040 [2024-10-06 11:29:41.476723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:44.040 [2024-10-06 11:29:41.484361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f6cc8 00:34:44.040 [2024-10-06 11:29:41.485034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.040 [2024-10-06 11:29:41.485052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:44.040 [2024-10-06 11:29:41.494105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e4140 00:34:44.040 [2024-10-06 11:29:41.494915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.040 [2024-10-06 11:29:41.494933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:44.040 [2024-10-06 11:29:41.503872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f0bc0 00:34:44.040 [2024-10-06 11:29:41.504810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:21815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.040 [2024-10-06 11:29:41.504828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:44.040 [2024-10-06 11:29:41.513500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fda78 00:34:44.040 [2024-10-06 11:29:41.514571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.040 [2024-10-06 11:29:41.514590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:44.040 [2024-10-06 11:29:41.523320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198df988 00:34:44.040 [2024-10-06 11:29:41.524515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.040 [2024-10-06 11:29:41.524535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:44.040 [2024-10-06 11:29:41.533083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e4140 00:34:44.040 [2024-10-06 11:29:41.534407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.040 [2024-10-06 11:29:41.534426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:44.040 [2024-10-06 11:29:41.542826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198feb58 00:34:44.040 [2024-10-06 11:29:41.544256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.040 [2024-10-06 11:29:41.544275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:44.040 [2024-10-06 11:29:41.552469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e0a68 00:34:44.040 [2024-10-06 11:29:41.554027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.040 [2024-10-06 11:29:41.554046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:44.040 [2024-10-06 11:29:41.558957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f3e60 00:34:44.040 [2024-10-06 11:29:41.559639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.040 [2024-10-06 11:29:41.559657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:44.040 [2024-10-06 11:29:41.568554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ed0b0 00:34:44.040 [2024-10-06 11:29:41.569379] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.040 [2024-10-06 11:29:41.569398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:44.040 [2024-10-06 11:29:41.578150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f92c0 00:34:44.040 [2024-10-06 11:29:41.579102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.040 [2024-10-06 11:29:41.579121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:44.040 [2024-10-06 11:29:41.586794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e99d8 00:34:44.040 [2024-10-06 11:29:41.587711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.040 [2024-10-06 11:29:41.587729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:44.040 [2024-10-06 11:29:41.596354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198dfdc0 00:34:44.040 [2024-10-06 11:29:41.597411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.040 [2024-10-06 11:29:41.597430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:44.040 [2024-10-06 11:29:41.605955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e1f80 00:34:44.040 [2024-10-06 11:29:41.607166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.040 [2024-10-06 11:29:41.607186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:44.300 [2024-10-06 11:29:41.615818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f6020 00:34:44.300 [2024-10-06 11:29:41.617143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.300 [2024-10-06 11:29:41.617162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:44.300 [2024-10-06 11:29:41.625498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198df550 00:34:44.300 [2024-10-06 11:29:41.626932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.300 [2024-10-06 11:29:41.626951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:44.300 [2024-10-06 11:29:41.635119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ed920 00:34:44.300 [2024-10-06 11:29:41.636656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.300 [2024-10-06 11:29:41.636674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:44.300 [2024-10-06 11:29:41.641603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f31b8 00:34:44.300 [2024-10-06 11:29:41.642309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.300 [2024-10-06 11:29:41.642328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:44.300 [2024-10-06 11:29:41.651238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e9e10 00:34:44.300 [2024-10-06 11:29:41.652075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.300 [2024-10-06 11:29:41.652095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:44.300 [2024-10-06 11:29:41.660837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e0a68 00:34:44.300 [2024-10-06 11:29:41.661796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.300 [2024-10-06 11:29:41.661815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:44.300 [2024-10-06 11:29:41.670424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e88f8 00:34:44.300 [2024-10-06 11:29:41.671463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.300 [2024-10-06 11:29:41.671482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:44.300 [2024-10-06 11:29:41.679659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fb048 00:34:44.300 [2024-10-06 11:29:41.680723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.300 [2024-10-06 11:29:41.680743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:44.300 [2024-10-06 11:29:41.689093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e6b70 00:34:44.301 [2024-10-06 11:29:41.690290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.690309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.696671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fb8b8 00:34:44.301 [2024-10-06 
11:29:41.697341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.697365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.705750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198df118 00:34:44.301 [2024-10-06 11:29:41.706438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.706457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.714126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e5220 00:34:44.301 [2024-10-06 11:29:41.714802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.714821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.723484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e38d0 00:34:44.301 [2024-10-06 11:29:41.724210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.724229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.732784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f6020 00:34:44.301 [2024-10-06 11:29:41.733530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.733549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.742875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fe720 00:34:44.301 [2024-10-06 11:29:41.743930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.743949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.752486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fb048 00:34:44.301 [2024-10-06 11:29:41.753674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.753692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.761016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e4140 
00:34:44.301 [2024-10-06 11:29:41.761908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.761926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.770310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f3a28 00:34:44.301 [2024-10-06 11:29:41.770989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.771008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.781622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f6020 00:34:44.301 [2024-10-06 11:29:41.783270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.783289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.788222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ea248 00:34:44.301 [2024-10-06 11:29:41.788975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.788994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.797546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f6cc8 00:34:44.301 [2024-10-06 11:29:41.798287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.798305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.806743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f6890 00:34:44.301 [2024-10-06 11:29:41.807579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.807599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.816064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e6fa8 00:34:44.301 [2024-10-06 11:29:41.816837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.816855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.826514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with 
pdu=0x2000198e3d08 00:34:44.301 [2024-10-06 11:29:41.827656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.827675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.835631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:44.301 [2024-10-06 11:29:41.836739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.836757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.844779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:44.301 [2024-10-06 11:29:41.845870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.845889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.853925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:44.301 [2024-10-06 11:29:41.855044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.855070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.863051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:44.301 [2024-10-06 11:29:41.864207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.864226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.301 [2024-10-06 11:29:41.872315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:44.301 [2024-10-06 11:29:41.873354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.301 [2024-10-06 11:29:41.873374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.561 [2024-10-06 11:29:41.881638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:44.561 [2024-10-06 11:29:41.882785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.561 [2024-10-06 11:29:41.882806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.561 [2024-10-06 11:29:41.890824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:44.561 [2024-10-06 11:29:41.891934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.561 [2024-10-06 11:29:41.891953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.561 [2024-10-06 11:29:41.900046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:44.561 [2024-10-06 11:29:41.901206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.561 [2024-10-06 11:29:41.901225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.561 [2024-10-06 11:29:41.909184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:44.561 [2024-10-06 11:29:41.910325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.561 [2024-10-06 11:29:41.910343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.561 [2024-10-06 11:29:41.918271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:44.561 [2024-10-06 11:29:41.919386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.561 [2024-10-06 11:29:41.919404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.561 [2024-10-06 11:29:41.927428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:44.561 [2024-10-06 11:29:41.928583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.561 [2024-10-06 11:29:41.928602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.561 [2024-10-06 11:29:41.936563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:44.561 [2024-10-06 11:29:41.937729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.561 [2024-10-06 11:29:41.937751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.561 [2024-10-06 11:29:41.945697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:44.561 [2024-10-06 11:29:41.946802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.561 [2024-10-06 11:29:41.946822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.561 [2024-10-06 11:29:41.954794] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:44.561 [2024-10-06 11:29:41.955905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.561 [2024-10-06 11:29:41.955924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.561 [2024-10-06 11:29:41.963925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:44.561 [2024-10-06 11:29:41.964974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.561 [2024-10-06 11:29:41.964994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.561 [2024-10-06 11:29:41.973116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:44.561 [2024-10-06 11:29:41.974166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.561 [2024-10-06 11:29:41.974186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.561 [2024-10-06 11:29:41.982274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:44.561 [2024-10-06 11:29:41.983281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.561 [2024-10-06 11:29:41.983299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.561 27661.00 IOPS, 108.05 MiB/s [2024-10-06 11:29:41.991333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198dece0 00:34:44.561 [2024-10-06 11:29:41.992346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.561 [2024-10-06 11:29:41.992365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.561 [2024-10-06 11:29:42.000524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fef90 00:34:44.561 [2024-10-06 11:29:42.001656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.561 [2024-10-06 11:29:42.001674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.561 [2024-10-06 11:29:42.009807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198eaab8 00:34:44.562 [2024-10-06 11:29:42.010931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.562 [2024-10-06 11:29:42.010949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.562 
[2024-10-06 11:29:42.018943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fcdd0 00:34:44.562 [2024-10-06 11:29:42.020044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.562 [2024-10-06 11:29:42.020067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.562 [2024-10-06 11:29:42.027985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f1430 00:34:44.562 [2024-10-06 11:29:42.029102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.562 [2024-10-06 11:29:42.029121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.562 [2024-10-06 11:29:42.037322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ed920 00:34:44.562 [2024-10-06 11:29:42.038482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.562 [2024-10-06 11:29:42.038501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.562 [2024-10-06 11:29:42.046606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e99d8 00:34:44.562 [2024-10-06 11:29:42.047736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.562 [2024-10-06 11:29:42.047755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.562 [2024-10-06 11:29:42.055917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fa3a0 00:34:44.562 [2024-10-06 11:29:42.057049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.562 [2024-10-06 11:29:42.057071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.562 [2024-10-06 11:29:42.065093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f92c0 00:34:44.562 [2024-10-06 11:29:42.066176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.562 [2024-10-06 11:29:42.066195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.562 [2024-10-06 11:29:42.074243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e1f80 00:34:44.562 [2024-10-06 11:29:42.075371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.562 [2024-10-06 11:29:42.075389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 
m:0 dnr:0 00:34:44.562 [2024-10-06 11:29:42.083400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ee190 00:34:44.562 [2024-10-06 11:29:42.084524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:50 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.562 [2024-10-06 11:29:42.084543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.562 [2024-10-06 11:29:42.092544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198df118 00:34:44.562 [2024-10-06 11:29:42.093670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.562 [2024-10-06 11:29:42.093688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.562 [2024-10-06 11:29:42.101682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198eb328 00:34:44.562 [2024-10-06 11:29:42.102841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.562 [2024-10-06 11:29:42.102859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.562 [2024-10-06 11:29:42.110853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f0ff8 00:34:44.562 [2024-10-06 11:29:42.111966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.562 [2024-10-06 11:29:42.111984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.562 [2024-10-06 11:29:42.119983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f3a28 00:34:44.562 [2024-10-06 11:29:42.121097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.562 [2024-10-06 11:29:42.121115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.562 [2024-10-06 11:29:42.129119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e5658 00:34:44.562 [2024-10-06 11:29:42.130260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.562 [2024-10-06 11:29:42.130278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.138511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e38d0 00:34:44.822 [2024-10-06 11:29:42.139630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.822 [2024-10-06 11:29:42.139649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 
cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.147705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f8a50 00:34:44.822 [2024-10-06 11:29:42.148839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.822 [2024-10-06 11:29:42.148857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.156874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f8618 00:34:44.822 [2024-10-06 11:29:42.157986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.822 [2024-10-06 11:29:42.158004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.165998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198eea00 00:34:44.822 [2024-10-06 11:29:42.167106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.822 [2024-10-06 11:29:42.167124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.175136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e6300 00:34:44.822 [2024-10-06 11:29:42.176266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.822 [2024-10-06 11:29:42.176288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.184268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e1710 00:34:44.822 [2024-10-06 11:29:42.185367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.822 [2024-10-06 11:29:42.185385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.193339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fdeb0 00:34:44.822 [2024-10-06 11:29:42.194437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.822 [2024-10-06 11:29:42.194455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.202435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e5220 00:34:44.822 [2024-10-06 11:29:42.203509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.822 [2024-10-06 11:29:42.203528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.211531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ef270 00:34:44.822 [2024-10-06 11:29:42.212657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.822 [2024-10-06 11:29:42.212676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.220679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e84c0 00:34:44.822 [2024-10-06 11:29:42.221798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.822 [2024-10-06 11:29:42.221817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.229816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e23b8 00:34:44.822 [2024-10-06 11:29:42.230932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.822 [2024-10-06 11:29:42.230951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.238954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f5be8 00:34:44.822 [2024-10-06 11:29:42.240082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.822 [2024-10-06 11:29:42.240100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.248124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f4b08 00:34:44.822 [2024-10-06 11:29:42.249251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.822 [2024-10-06 11:29:42.249270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.257378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f1ca0 00:34:44.822 [2024-10-06 11:29:42.258460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.822 [2024-10-06 11:29:42.258479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.266531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ddc00 00:34:44.822 [2024-10-06 11:29:42.267634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.822 [2024-10-06 11:29:42.267653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.275698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f9b30 00:34:44.822 [2024-10-06 11:29:42.276915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.822 [2024-10-06 11:29:42.276933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.284920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ebb98 00:34:44.822 [2024-10-06 11:29:42.286042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.822 [2024-10-06 11:29:42.286063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.294275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198dece0 00:34:44.822 [2024-10-06 11:29:42.295406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.822 [2024-10-06 11:29:42.295425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.822 [2024-10-06 11:29:42.303536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fef90 00:34:44.822 [2024-10-06 11:29:42.304640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.823 [2024-10-06 11:29:42.304659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.823 [2024-10-06 11:29:42.312814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198eaab8 00:34:44.823 [2024-10-06 11:29:42.313920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.823 [2024-10-06 11:29:42.313939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.823 [2024-10-06 11:29:42.322101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fcdd0 00:34:44.823 [2024-10-06 11:29:42.323229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.823 [2024-10-06 11:29:42.323247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.823 [2024-10-06 11:29:42.331270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f1430 00:34:44.823 [2024-10-06 11:29:42.332382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.823 [2024-10-06 11:29:42.332401] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.823 [2024-10-06 11:29:42.340335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ed920 00:34:44.823 [2024-10-06 11:29:42.341492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.823 [2024-10-06 11:29:42.341511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.823 [2024-10-06 11:29:42.349468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e99d8 00:34:44.823 [2024-10-06 11:29:42.350542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.823 [2024-10-06 11:29:42.350561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.823 [2024-10-06 11:29:42.358563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fa3a0 00:34:44.823 [2024-10-06 11:29:42.359641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.823 [2024-10-06 11:29:42.359659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.823 [2024-10-06 11:29:42.367703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f92c0 00:34:44.823 [2024-10-06 11:29:42.368839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.823 [2024-10-06 11:29:42.368857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.823 [2024-10-06 11:29:42.376854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e1f80 00:34:44.823 [2024-10-06 11:29:42.377982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.823 [2024-10-06 11:29:42.378000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.823 [2024-10-06 11:29:42.386001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ee190 00:34:44.823 [2024-10-06 11:29:42.387076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.823 [2024-10-06 11:29:42.387094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.823 [2024-10-06 11:29:42.395391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198df118 00:34:45.083 [2024-10-06 11:29:42.396456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.083 [2024-10-06 11:29:42.396475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.083 [2024-10-06 11:29:42.404684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198eb328 00:34:45.083 [2024-10-06 11:29:42.405805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.083 [2024-10-06 11:29:42.405824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.083 [2024-10-06 11:29:42.413854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f0ff8 00:34:45.083 [2024-10-06 11:29:42.414930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.083 [2024-10-06 11:29:42.414947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.083 [2024-10-06 11:29:42.422993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f3a28 00:34:45.083 [2024-10-06 11:29:42.424081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.083 [2024-10-06 11:29:42.424115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.083 [2024-10-06 11:29:42.432080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e5658 00:34:45.083 [2024-10-06 11:29:42.433255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.083 [2024-10-06 11:29:42.433274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.083 [2024-10-06 11:29:42.441237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e38d0 00:34:45.083 [2024-10-06 11:29:42.442337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.083 [2024-10-06 11:29:42.442354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.083 [2024-10-06 11:29:42.450356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f8a50 00:34:45.083 [2024-10-06 11:29:42.451444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.083 [2024-10-06 11:29:42.451461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.083 [2024-10-06 11:29:42.459511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f8618 00:34:45.083 [2024-10-06 11:29:42.460550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.083 [2024-10-06 
11:29:42.460569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.083 [2024-10-06 11:29:42.468636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198eea00 00:34:45.083 [2024-10-06 11:29:42.469759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.083 [2024-10-06 11:29:42.469777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.083 [2024-10-06 11:29:42.477790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e6300 00:34:45.083 [2024-10-06 11:29:42.478867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.083 [2024-10-06 11:29:42.478885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.083 [2024-10-06 11:29:42.486916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e1710 00:34:45.083 [2024-10-06 11:29:42.487992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.083 [2024-10-06 11:29:42.488011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.083 [2024-10-06 11:29:42.496084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fdeb0 00:34:45.083 [2024-10-06 11:29:42.497215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.083 [2024-10-06 11:29:42.497236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.083 [2024-10-06 11:29:42.505273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e5220 00:34:45.083 [2024-10-06 11:29:42.506380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.083 [2024-10-06 11:29:42.506398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.083 [2024-10-06 11:29:42.514377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ef270 00:34:45.083 [2024-10-06 11:29:42.515451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.083 [2024-10-06 11:29:42.515469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.083 [2024-10-06 11:29:42.523528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e84c0 00:34:45.083 [2024-10-06 11:29:42.524567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:45.083 [2024-10-06 11:29:42.524586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.083 [2024-10-06 11:29:42.532666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e23b8 00:34:45.083 [2024-10-06 11:29:42.533810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.083 [2024-10-06 11:29:42.533829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.083 [2024-10-06 11:29:42.541772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f5be8 00:34:45.083 [2024-10-06 11:29:42.542925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.083 [2024-10-06 11:29:42.542943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.083 [2024-10-06 11:29:42.551142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f4b08 00:34:45.083 [2024-10-06 11:29:42.552301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.083 [2024-10-06 11:29:42.552319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.083 [2024-10-06 11:29:42.560403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f1ca0 00:34:45.084 [2024-10-06 11:29:42.561544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.084 [2024-10-06 11:29:42.561562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.084 [2024-10-06 11:29:42.569690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ddc00 00:34:45.084 [2024-10-06 11:29:42.570810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.084 [2024-10-06 11:29:42.570829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.084 [2024-10-06 11:29:42.578823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f9b30 00:34:45.084 [2024-10-06 11:29:42.579962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.084 [2024-10-06 11:29:42.579981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.084 [2024-10-06 11:29:42.587972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ebb98 00:34:45.084 [2024-10-06 11:29:42.589117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21160 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:45.084 [2024-10-06 11:29:42.589136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.084 [2024-10-06 11:29:42.597135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198dece0 00:34:45.084 [2024-10-06 11:29:42.598184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.084 [2024-10-06 11:29:42.598203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.084 [2024-10-06 11:29:42.606272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fef90 00:34:45.084 [2024-10-06 11:29:42.607376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.084 [2024-10-06 11:29:42.607394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.084 [2024-10-06 11:29:42.615432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198eaab8 00:34:45.084 [2024-10-06 11:29:42.616468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.084 [2024-10-06 11:29:42.616487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.084 [2024-10-06 11:29:42.624906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f4f40 00:34:45.084 [2024-10-06 11:29:42.626158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.084 [2024-10-06 11:29:42.626175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.084 [2024-10-06 11:29:42.632854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e2c28 00:34:45.084 [2024-10-06 11:29:42.634299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.084 [2024-10-06 11:29:42.634316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:45.084 [2024-10-06 11:29:42.642200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fa3a0 00:34:45.084 [2024-10-06 11:29:42.643269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.084 [2024-10-06 11:29:42.643287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:45.084 [2024-10-06 11:29:42.653129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ecc78 00:34:45.084 [2024-10-06 11:29:42.654707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16251 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.084 [2024-10-06 11:29:42.654725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.659702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198de8a8 00:34:45.344 [2024-10-06 11:29:42.660438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.660457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.669028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198df550 00:34:45.344 [2024-10-06 11:29:42.669696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.669715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.678167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198dfdc0 00:34:45.344 [2024-10-06 11:29:42.678789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.678808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.687282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f7100 00:34:45.344 [2024-10-06 11:29:42.687908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.687928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.696442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f3e60 00:34:45.344 [2024-10-06 11:29:42.697097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.697116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.705588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f5378 00:34:45.344 [2024-10-06 11:29:42.706217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.706235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.714713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f6020 00:34:45.344 [2024-10-06 11:29:42.715345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 
nsid:1 lba:5845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.715364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.723278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fe720 00:34:45.344 [2024-10-06 11:29:42.723981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.723999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.732876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e1b48 00:34:45.344 [2024-10-06 11:29:42.733727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.733748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.743756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e6b70 00:34:45.344 [2024-10-06 11:29:42.745083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.745102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.751862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198de470 00:34:45.344 [2024-10-06 11:29:42.752497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.752516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.761212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e3d08 00:34:45.344 [2024-10-06 11:29:42.762168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.762187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.770228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f0350 00:34:45.344 [2024-10-06 11:29:42.770822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.770840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.779435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ed920 00:34:45.344 [2024-10-06 11:29:42.780292] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.780310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.787984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e1f80 00:34:45.344 [2024-10-06 11:29:42.788935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.788953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.797604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fa7d8 00:34:45.344 [2024-10-06 11:29:42.798686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.798704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.807119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fc560 00:34:45.344 [2024-10-06 11:29:42.808190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.808209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.816668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fe2e8 00:34:45.344 [2024-10-06 11:29:42.817746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.817765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.825347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ec840 00:34:45.344 [2024-10-06 11:29:42.826277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.344 [2024-10-06 11:29:42.826296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:45.344 [2024-10-06 11:29:42.833865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e38d0 00:34:45.344 [2024-10-06 11:29:42.834699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.345 [2024-10-06 11:29:42.834718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:45.345 [2024-10-06 11:29:42.843254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fdeb0 00:34:45.345 [2024-10-06 11:29:42.844043] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.345 [2024-10-06 11:29:42.844064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:45.345 [2024-10-06 11:29:42.852343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f3e60 00:34:45.345 [2024-10-06 11:29:42.852806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.345 [2024-10-06 11:29:42.852825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:45.345 [2024-10-06 11:29:42.861930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e95a0 00:34:45.345 [2024-10-06 11:29:42.862553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.345 [2024-10-06 11:29:42.862571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:45.345 [2024-10-06 11:29:42.870268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f5be8 00:34:45.345 [2024-10-06 11:29:42.871074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.345 [2024-10-06 11:29:42.871092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:45.345 [2024-10-06 11:29:42.880486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198df550 00:34:45.345 [2024-10-06 11:29:42.881337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.345 [2024-10-06 11:29:42.881356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:45.345 [2024-10-06 11:29:42.889903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e88f8 00:34:45.345 [2024-10-06 11:29:42.890967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.345 [2024-10-06 11:29:42.890986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:45.345 [2024-10-06 11:29:42.898655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e6300 00:34:45.345 [2024-10-06 11:29:42.899682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.345 [2024-10-06 11:29:42.899701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:45.345 [2024-10-06 11:29:42.908301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ec408 00:34:45.345 [2024-10-06 
11:29:42.909394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.345 [2024-10-06 11:29:42.909414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:45.604 [2024-10-06 11:29:42.918232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e12d8 00:34:45.604 [2024-10-06 11:29:42.919448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.604 [2024-10-06 11:29:42.919467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:45.604 [2024-10-06 11:29:42.927965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198eaab8 00:34:45.604 [2024-10-06 11:29:42.929289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.604 [2024-10-06 11:29:42.929308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:45.605 [2024-10-06 11:29:42.937529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198ea248 00:34:45.605 [2024-10-06 11:29:42.939009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.605 [2024-10-06 11:29:42.939027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:45.605 [2024-10-06 11:29:42.944038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198fd208 00:34:45.605 [2024-10-06 11:29:42.944713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.605 [2024-10-06 11:29:42.944732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:45.605 [2024-10-06 11:29:42.953555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198eaef0 00:34:45.605 [2024-10-06 11:29:42.954149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.605 [2024-10-06 11:29:42.954168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:45.605 [2024-10-06 11:29:42.962092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e4de8 00:34:45.605 [2024-10-06 11:29:42.962754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.605 [2024-10-06 11:29:42.962772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:45.605 [2024-10-06 11:29:42.971683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f0788 
00:34:45.605 [2024-10-06 11:29:42.972411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:45.605 [2024-10-06 11:29:42.972432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:34:45.605 [2024-10-06 11:29:42.981926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198e95a0
00:34:45.605 [2024-10-06 11:29:42.982792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:45.605 [2024-10-06 11:29:42.982812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:45.605 27748.00 IOPS, 108.39 MiB/s [2024-10-06 11:29:42.991025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f37e0) with pdu=0x2000198f4298
[2024-10-06 11:29:42.991846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:45.605 [2024-10-06 11:29:42.991864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:45.605
00:34:45.605 Latency(us)
00:34:45.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:45.605 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:45.605 nvme0n1 : 2.00 27765.88 108.46 0.00 0.00 4605.22 1771.03 11047.50
00:34:45.605 ===================================================================================================================
00:34:45.605 Total : 27765.88 108.46 0.00 0.00 4605.22 1771.03 11047.50
00:34:45.605 {
00:34:45.605 "results": [
00:34:45.605 {
00:34:45.605 "job": "nvme0n1",
00:34:45.605 "core_mask": "0x2",
00:34:45.605 "workload": "randwrite",
00:34:45.605 "status": "finished",
00:34:45.605 "queue_depth": 128,
00:34:45.605 "io_size": 4096,
00:34:45.605 "runtime": 2.003322,
00:34:45.605 "iops": 27765.88087187182,
00:34:45.605 "mibps": 108.4604721557493,
00:34:45.605 "io_failed": 0,
00:34:45.605 "io_timeout": 0,
00:34:45.605 "avg_latency_us": 4605.2162303356545,
00:34:45.605 "min_latency_us": 1771.032380952381,
00:34:45.605 "max_latency_us": 11047.497142857143
00:34:45.605 }
00:34:45.605 ],
00:34:45.605 "core_count": 1
00:34:45.605 }
00:34:45.605 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:45.605 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:45.605 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:45.605 | .driver_specific
00:34:45.605 | .nvme_error
00:34:45.605 | .status_code
00:34:45.605 | .command_transient_transport_error'
00:34:45.605 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:45.865 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 ))
00:34:45.865 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2260862
00:34:45.865 11:29:43
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2260862 ']' 00:34:45.865 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2260862 00:34:45.865 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:34:45.865 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:45.865 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2260862 00:34:45.865 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:45.865 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:45.865 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2260862' 00:34:45.865 killing process with pid 2260862 00:34:45.865 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2260862 00:34:45.865 Received shutdown signal, test time was about 2.000000 seconds 00:34:45.865 00:34:45.865 Latency(us) 00:34:45.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.865 =================================================================================================================== 00:34:45.865 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:45.865 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2260862 00:34:46.124 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:34:46.124 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:46.124 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:46.124 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:46.124 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:46.124 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2261451 00:34:46.124 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2261451 /var/tmp/bperf.sock 00:34:46.124 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:34:46.124 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2261451 ']' 00:34:46.124 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:46.124 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:46.124 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:46.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
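[Editor's note] The trace above tears down the previous bdevperf instance and launches a fresh one for the 128 KiB random-write error run (queue depth 16, 2-second run, started idle until perform_tests is issued). A minimal bash sketch of what this launch-and-wait step amounts to follows; the workspace path, socket, and flags are the ones shown in the log, while the polling loop is only an approximation of the framework's waitforlisten helper, which does considerably more (retry limits, PID checks).

# Sketch only: launch bdevperf as in the traced command and wait for its RPC socket.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# -m 2: run on core 1 (core mask 0x2); -o 131072 -q 16 -t 2: 128 KiB random writes,
# queue depth 16, 2-second run; -z: stay idle until the perform_tests RPC arrives.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Poll (up to ~10 s) for the UNIX-domain RPC socket before sending any RPCs to it.
for _ in $(seq 1 100); do
    [ -S "$BPERF_SOCK" ] && break
    sleep 0.1
done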
00:34:46.124 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:46.124 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.124 [2024-10-06 11:29:43.492218] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:34:46.125 [2024-10-06 11:29:43.492264] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2261451 ] 00:34:46.125 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:46.125 Zero copy mechanism will not be used. 00:34:46.125 [2024-10-06 11:29:43.547856] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.125 [2024-10-06 11:29:43.588110] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:46.125 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:46.125 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:34:46.125 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:46.125 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:46.384 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:46.384 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.384 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.384 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.384 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:46.384 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:46.953 nvme0n1 00:34:46.953 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:46.953 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.953 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.953 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.953 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:46.953 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
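[Editor's note] The RPC sequence traced above reduces to the bash sketch below. Every RPC name, flag, and path appears verbatim in the log; the one assumption is the socket addressed by the framework's rpc_cmd helper for the accel error injection (taken here to be the default application socket of the other SPDK process), whereas the bperf_rpc calls go to /var/tmp/bperf.sock as shown.

# Sketch of the traced setup, under the socket assumption noted above.
RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock
APP_SOCK=/var/tmp/spdk.sock   # assumed default socket used by rpc_cmd in this run

# Count NVMe error completions per status code and retry failed I/O indefinitely.
$RPC_PY -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Clear any previous CRC32C error injection, then attach with data digest (--ddgst) enabled.
$RPC_PY -s $APP_SOCK accel_error_inject_error -o crc32c -t disable
$RPC_PY -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Arm 32 corrupted CRC32C results so data digests mismatch, then start the queued workload.
$RPC_PY -s $APP_SOCK accel_error_inject_error -o crc32c -t corrupt -i 32
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s $BPERF_SOCK perform_tests
# Afterwards the test reads back how many commands ended in TRANSIENT TRANSPORT ERROR (00/22),
# the same query host/digest.sh@71 issued earlier in this log.
$RPC_PY -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'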
00:34:46.953 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:46.953 Zero copy mechanism will not be used. 00:34:46.953 Running I/O for 2 seconds... 00:34:46.953 [2024-10-06 11:29:44.401659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.953 [2024-10-06 11:29:44.401958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.953 [2024-10-06 11:29:44.401984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:46.953 [2024-10-06 11:29:44.409679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.953 [2024-10-06 11:29:44.409947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.953 [2024-10-06 11:29:44.409971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:46.953 [2024-10-06 11:29:44.415497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.953 [2024-10-06 11:29:44.415766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.953 [2024-10-06 11:29:44.415787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:46.953 [2024-10-06 11:29:44.420910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.953 [2024-10-06 11:29:44.421178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.953 [2024-10-06 11:29:44.421199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.953 [2024-10-06 11:29:44.425653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.953 [2024-10-06 11:29:44.425908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.953 [2024-10-06 11:29:44.425929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:46.953 [2024-10-06 11:29:44.430778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.953 [2024-10-06 11:29:44.431040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.953 [2024-10-06 11:29:44.431067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:46.953 [2024-10-06 11:29:44.435659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.953 [2024-10-06 11:29:44.435911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:46.953 [2024-10-06 11:29:44.435930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:46.953 [2024-10-06 11:29:44.440523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.953 [2024-10-06 11:29:44.440798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.953 [2024-10-06 11:29:44.440819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.953 [2024-10-06 11:29:44.445440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.954 [2024-10-06 11:29:44.445674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.954 [2024-10-06 11:29:44.445695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:46.954 [2024-10-06 11:29:44.451137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.954 [2024-10-06 11:29:44.451384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.954 [2024-10-06 11:29:44.451404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:46.954 [2024-10-06 11:29:44.456791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.954 [2024-10-06 11:29:44.457024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.954 [2024-10-06 11:29:44.457045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:46.954 [2024-10-06 11:29:44.461872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.954 [2024-10-06 11:29:44.462119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.954 [2024-10-06 11:29:44.462139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.954 [2024-10-06 11:29:44.466943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.954 [2024-10-06 11:29:44.467209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.954 [2024-10-06 11:29:44.467230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:46.954 [2024-10-06 11:29:44.471834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.954 [2024-10-06 11:29:44.472086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.954 [2024-10-06 11:29:44.472111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:46.954 [2024-10-06 11:29:44.476500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.954 [2024-10-06 11:29:44.476738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.954 [2024-10-06 11:29:44.476757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:46.954 [2024-10-06 11:29:44.481317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.954 [2024-10-06 11:29:44.481554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.954 [2024-10-06 11:29:44.481574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.954 [2024-10-06 11:29:44.486747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.954 [2024-10-06 11:29:44.486982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.954 [2024-10-06 11:29:44.487002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:46.954 [2024-10-06 11:29:44.492963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.954 [2024-10-06 11:29:44.493205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.954 [2024-10-06 11:29:44.493225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:46.954 [2024-10-06 11:29:44.498007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.954 [2024-10-06 11:29:44.498249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.954 [2024-10-06 11:29:44.498269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:46.954 [2024-10-06 11:29:44.503150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.954 [2024-10-06 11:29:44.503398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.954 [2024-10-06 11:29:44.503418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.954 [2024-10-06 11:29:44.508550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.954 [2024-10-06 11:29:44.508797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.954 [2024-10-06 11:29:44.508816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:46.954 [2024-10-06 11:29:44.514292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.954 [2024-10-06 11:29:44.514531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.954 [2024-10-06 11:29:44.514551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:46.954 [2024-10-06 11:29:44.519962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.954 [2024-10-06 11:29:44.520233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.954 [2024-10-06 11:29:44.520252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:46.954 [2024-10-06 11:29:44.525451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:46.954 [2024-10-06 11:29:44.525696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.954 [2024-10-06 11:29:44.525716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.213 [2024-10-06 11:29:44.531103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.213 [2024-10-06 11:29:44.531393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.213 [2024-10-06 11:29:44.531413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.213 [2024-10-06 11:29:44.537696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.213 [2024-10-06 11:29:44.538052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.213 [2024-10-06 11:29:44.538081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.213 [2024-10-06 11:29:44.545770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.213 [2024-10-06 11:29:44.546072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.213 [2024-10-06 11:29:44.546091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.213 [2024-10-06 11:29:44.552067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.213 
[2024-10-06 11:29:44.552351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.213 [2024-10-06 11:29:44.552370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.213 [2024-10-06 11:29:44.558121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.213 [2024-10-06 11:29:44.558372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.213 [2024-10-06 11:29:44.558392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.213 [2024-10-06 11:29:44.563548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.213 [2024-10-06 11:29:44.563796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.213 [2024-10-06 11:29:44.563817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.213 [2024-10-06 11:29:44.569292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.213 [2024-10-06 11:29:44.569538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.213 [2024-10-06 11:29:44.569558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.213 [2024-10-06 11:29:44.574406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.213 [2024-10-06 11:29:44.574652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.213 [2024-10-06 11:29:44.574671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.213 [2024-10-06 11:29:44.579483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.213 [2024-10-06 11:29:44.579727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.213 [2024-10-06 11:29:44.579747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.213 [2024-10-06 11:29:44.584433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.213 [2024-10-06 11:29:44.584676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.213 [2024-10-06 11:29:44.584695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.213 [2024-10-06 11:29:44.589811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.213 [2024-10-06 11:29:44.590052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.213 [2024-10-06 11:29:44.590077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.213 [2024-10-06 11:29:44.594677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.213 [2024-10-06 11:29:44.594921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.213 [2024-10-06 11:29:44.594941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.213 [2024-10-06 11:29:44.599830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.213 [2024-10-06 11:29:44.600076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.213 [2024-10-06 11:29:44.600095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.213 [2024-10-06 11:29:44.604952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.213 [2024-10-06 11:29:44.605195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.213 [2024-10-06 11:29:44.605214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.610465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.610712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.610732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.615401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.615641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.615665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.620294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.620538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.620558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.625275] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.625515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.625534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.629865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.630125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.630144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.634632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.634878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.634897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.639745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.639984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.640003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.645299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.645536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.645555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.651189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.651428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.651447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.656523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.656770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.656797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:47.214 [2024-10-06 11:29:44.662203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.662453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.662472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.668240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.668487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.668507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.674552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.674800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.674820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.680511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.680759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.680779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.686412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.686658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.686678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.692508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.692750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.692770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.698020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.698281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.698301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.703355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.703596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.703615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.708122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.708373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.708392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.712916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.713159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.713178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.718194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.718430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.718449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.722911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.723171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.723190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.727666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.727909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.727929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.732539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.732777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.732796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.737008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.737249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.737268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.741234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.741481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.741500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.745434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.745680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.745700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.750018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.750264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.750291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.754593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.754829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.754849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.758867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.214 [2024-10-06 11:29:44.759112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.214 [2024-10-06 11:29:44.759132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.214 [2024-10-06 11:29:44.763048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.215 [2024-10-06 11:29:44.763319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.215 [2024-10-06 11:29:44.763339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.215 [2024-10-06 11:29:44.767357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.215 [2024-10-06 11:29:44.767602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.215 [2024-10-06 11:29:44.767622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.215 [2024-10-06 11:29:44.771552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.215 [2024-10-06 11:29:44.771799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.215 [2024-10-06 11:29:44.771818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.215 [2024-10-06 11:29:44.775716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.215 [2024-10-06 11:29:44.775961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.215 [2024-10-06 11:29:44.775980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.215 [2024-10-06 11:29:44.780460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.215 [2024-10-06 11:29:44.780732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.215 [2024-10-06 11:29:44.780752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.215 [2024-10-06 11:29:44.785633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.215 [2024-10-06 11:29:44.785871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.215 [2024-10-06 11:29:44.785891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.474 [2024-10-06 11:29:44.789822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.474 [2024-10-06 11:29:44.790072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.790092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.794680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.794914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 
[2024-10-06 11:29:44.794933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.798795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.799030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.799049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.803476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.803711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.803731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.807900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.808158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.808178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.812012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.812264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.812284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.816528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.816776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.816796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.820963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.821236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.821255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.825678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.825913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.825937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.829933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.830173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.830192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.834118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.834384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.834403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.838348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.838582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.838602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.842818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.843053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.843078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.847175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.847421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.847441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.851785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.852034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.852053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.855889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.856139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.856159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.860017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.860264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.860284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.864144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.864384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.864403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.868960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.869209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.869229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.873729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.873975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.873994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.878078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.878324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.878343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.882338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.882583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.882604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.886662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.886909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.886929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.890951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.891215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.891235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.895192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.895439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.895458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.899431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.899676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.475 [2024-10-06 11:29:44.899696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.475 [2024-10-06 11:29:44.903677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.475 [2024-10-06 11:29:44.903923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.903942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.907935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.908195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.908213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.912194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.912436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.912456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.916453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 
[2024-10-06 11:29:44.916692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.916712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.920697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.920939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.920959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.924921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.925166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.925185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.929196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.929440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.929460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.933446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.933690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.933710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.937728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.937971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.937994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.941986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.942234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.942253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.946234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.946470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.946492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.950451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.950689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.950708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.954611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.954846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.954865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.958793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.959031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.959051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.963015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.963279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.963299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.967201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.967447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.967467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.971383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.971631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.971651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.975590] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.975830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.975850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.979780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.980011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.980030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.983953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.984192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.984211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.988115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.988352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.988372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.992275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.992513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.992532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:44.996493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:44.996727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:44.996748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:45.000670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:45.000904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:45.000922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:47.476 [2024-10-06 11:29:45.004864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:45.005109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:45.005129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:45.009042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.476 [2024-10-06 11:29:45.009286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.476 [2024-10-06 11:29:45.009306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.476 [2024-10-06 11:29:45.013259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.477 [2024-10-06 11:29:45.013498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.477 [2024-10-06 11:29:45.013518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.477 [2024-10-06 11:29:45.017513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.477 [2024-10-06 11:29:45.017749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.477 [2024-10-06 11:29:45.017768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.477 [2024-10-06 11:29:45.021703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.477 [2024-10-06 11:29:45.021939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.477 [2024-10-06 11:29:45.021959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.477 [2024-10-06 11:29:45.025887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.477 [2024-10-06 11:29:45.026130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.477 [2024-10-06 11:29:45.026149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.477 [2024-10-06 11:29:45.030037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.477 [2024-10-06 11:29:45.030280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.477 [2024-10-06 11:29:45.030299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.477 [2024-10-06 11:29:45.034231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.477 [2024-10-06 11:29:45.034469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.477 [2024-10-06 11:29:45.034488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.477 [2024-10-06 11:29:45.038407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.477 [2024-10-06 11:29:45.038646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.477 [2024-10-06 11:29:45.038665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.477 [2024-10-06 11:29:45.042555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.477 [2024-10-06 11:29:45.042794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.477 [2024-10-06 11:29:45.042814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.477 [2024-10-06 11:29:45.046796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.477 [2024-10-06 11:29:45.047036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.477 [2024-10-06 11:29:45.047066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.051012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.051248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.051268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.055283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.055526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.055545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.059500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.059738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.059758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.063713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.063949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.063967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.067907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.068165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.068184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.072103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.072348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.072368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.076311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.076552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.076572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.080497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.080742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.080761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.084727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.084975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.084995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.088962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.089212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.089232] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.093127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.093380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.093399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.097304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.097539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.097559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.101419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.101657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.101677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.105573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.105809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.105829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.109716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.109950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.109970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.114294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.114530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.114549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.118811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.119049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 
[2024-10-06 11:29:45.119074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.124740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.125086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.125105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.131799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.132125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.132145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.138493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.738 [2024-10-06 11:29:45.138804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.738 [2024-10-06 11:29:45.138823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.738 [2024-10-06 11:29:45.145900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.146207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.146227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.153168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.153500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.153519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.160860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.161209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.161229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.168144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.168484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.168504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.176122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.176460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.176479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.184217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.184514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.184534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.191836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.192166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.192186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.199593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.199985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.200004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.207924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.208345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.208364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.216449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.216781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.216802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.225259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.225506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.225526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.232444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.232764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.232784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.240263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.240581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.240600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.248087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.248424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.248443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.256594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.256875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.256894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.264562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.264807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.264826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.271827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.272105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.272124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.279573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.279807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.279827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.286904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.287134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.287154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.293215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.293460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.293479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.299716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.299920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.299946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.739 [2024-10-06 11:29:45.306680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:47.739 [2024-10-06 11:29:45.307012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.739 [2024-10-06 11:29:45.307031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.314508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.314729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.314753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.321327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.321613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.321633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.327895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 
[2024-10-06 11:29:45.328108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.328127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.333389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.333599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.333619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.338524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.338729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.338748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.343113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.343311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.343330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.347295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.347496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.347514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.351489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.351687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.351706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.355668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.355865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.355884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.359842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.360043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.360068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.363929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.364145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.364164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.367995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.368215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.368234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.372107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.372312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.372332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.376197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.376404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.376423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.380274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.380479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.380498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.384311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.384515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.384534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.388403] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.388603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.388623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.001 6015.00 IOPS, 751.88 MiB/s [2024-10-06 11:29:45.393695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.393856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.393874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.397995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.398162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.398181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.401906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.402073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.402091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.405704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.405863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.405883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.409470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.409632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.001 [2024-10-06 11:29:45.409652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.001 [2024-10-06 11:29:45.413217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.001 [2024-10-06 11:29:45.413377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.413397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.417149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.417349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.417369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.421457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.421622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.421640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.425488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.425692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.425709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.430628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.430872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.430895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.436015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.436263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.436283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.441780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.442028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.442048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.447751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.448000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.448020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.453775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.454007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.454027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.460630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.460898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.460919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.466814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.467091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.467110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.473344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.473560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.473579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.480081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.480341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.480360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.487623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.487918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.487937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.495517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.495755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.495774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.503081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.503326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.503346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.511292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.511497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.511516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.518980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.519202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.519221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.524362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.524605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.524624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.529285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.529461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.529479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.534111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.534280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.534298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.538537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.538799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 
[2024-10-06 11:29:45.538822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.543047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.543229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.543247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.547160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.547322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.547340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.550974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.551137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.551156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.554704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.554872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.554890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.002 [2024-10-06 11:29:45.558464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.002 [2024-10-06 11:29:45.558621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.002 [2024-10-06 11:29:45.558639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.003 [2024-10-06 11:29:45.562187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.003 [2024-10-06 11:29:45.562364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.003 [2024-10-06 11:29:45.562382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.003 [2024-10-06 11:29:45.565886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.003 [2024-10-06 11:29:45.566053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.003 [2024-10-06 11:29:45.566079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.003 [2024-10-06 11:29:45.569662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.003 [2024-10-06 11:29:45.569824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.003 [2024-10-06 11:29:45.569842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.003 [2024-10-06 11:29:45.573440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.003 [2024-10-06 11:29:45.573606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.003 [2024-10-06 11:29:45.573625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.577457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.577672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.577692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.582409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.582693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.582712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.587882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.588161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.588180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.593343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.593600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.593620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.599116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.599317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.599336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.604825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.605018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.605042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.611537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.611793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.611813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.619119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.619306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.619325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.626432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.626681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.626701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.634097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.634267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.634285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.641619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.641837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.641857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.647652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.647958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.647977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.654336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.654580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.654600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.659406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.659573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.659591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.663549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.663725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.663744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.667856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.668021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.668039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.673116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.673279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.673305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.677866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.678043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.678067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.682604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 
[2024-10-06 11:29:45.682788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.682806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.687519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.687703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.687722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.692765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.692931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.692950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.697691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.697879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.697898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.703343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.703514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.703533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.708764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.708924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.264 [2024-10-06 11:29:45.708943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.264 [2024-10-06 11:29:45.713930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.264 [2024-10-06 11:29:45.714108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.714126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.718926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.719121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.719140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.723384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.723544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.723562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.727616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.727777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.727795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.731749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.731920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.731939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.736289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.736445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.736463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.740875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.741088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.741114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.746042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.746226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.746244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.749916] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.750082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.750116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.753694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.753882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.753900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.757471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.757630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.757649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.761664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.761823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.761841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.766341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.766529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.766548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.771455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.771613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.771630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.775999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.776173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.776192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:34:48.265 [2024-10-06 11:29:45.781088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.781256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.781275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.786077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.786258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.786277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.791610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.791777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.791795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.796924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.797090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.797112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.801292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.801455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.801473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.805315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.805480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.805499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.809266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.809429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.809448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.813195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.813358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.813376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.817328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.817496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.817515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.821378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.821546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.821566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.825425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.825595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.825614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.265 [2024-10-06 11:29:45.829646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.265 [2024-10-06 11:29:45.829827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.265 [2024-10-06 11:29:45.829847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.266 [2024-10-06 11:29:45.833822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.266 [2024-10-06 11:29:45.834002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.266 [2024-10-06 11:29:45.834022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.526 [2024-10-06 11:29:45.838043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.526 [2024-10-06 11:29:45.838231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.526 [2024-10-06 11:29:45.838251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.526 [2024-10-06 11:29:45.842198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.526 [2024-10-06 11:29:45.842376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.526 [2024-10-06 11:29:45.842396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.526 [2024-10-06 11:29:45.846354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.526 [2024-10-06 11:29:45.846524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.526 [2024-10-06 11:29:45.846543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.526 [2024-10-06 11:29:45.850384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.526 [2024-10-06 11:29:45.850551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.526 [2024-10-06 11:29:45.850570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.526 [2024-10-06 11:29:45.854440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.526 [2024-10-06 11:29:45.854613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.526 [2024-10-06 11:29:45.854632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.526 [2024-10-06 11:29:45.858491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.526 [2024-10-06 11:29:45.858654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.526 [2024-10-06 11:29:45.858672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.526 [2024-10-06 11:29:45.862473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.526 [2024-10-06 11:29:45.862637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.526 [2024-10-06 11:29:45.862655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.526 [2024-10-06 11:29:45.866470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.526 [2024-10-06 11:29:45.866652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.526 [2024-10-06 11:29:45.866670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.526 [2024-10-06 11:29:45.870364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.870535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.870554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.874408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.874570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.874588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.878315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.878481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.878500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.882158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.882328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.882347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.886020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.886199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.886218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.889882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.890044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.890068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.893828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.893988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 
[2024-10-06 11:29:45.894006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.897812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.897977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.897995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.902033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.902232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.902255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.906029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.906268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.906288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.910637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.910898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.910918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.916566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.916853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.916873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.922965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.923207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.923227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.929511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.929715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.929734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.936815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.937102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.937121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.944597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.944822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.944841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.952670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.952910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.952929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.960675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.960946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.960966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.968579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.968792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.968811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.976393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.976632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.976653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.984374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.984614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.984633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:45.992925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:45.993131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:45.993149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:46.000515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:46.000676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:46.000694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:46.007374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:46.007672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:46.007691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:46.014726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:46.014989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:46.015008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:46.022483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:46.022748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:46.022778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:46.030462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:46.030662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:46.030681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:46.038011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:46.038291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:46.038311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:46.046435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:46.046699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:46.046719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:46.054308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:46.054585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:46.054605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:46.062038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:46.062252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:46.062272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:46.069726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:46.070018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:46.070037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:46.077917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:46.078279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:46.078299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:46.086103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:46.086349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:46.086369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:46.093376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 
[2024-10-06 11:29:46.093626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:46.093646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.527 [2024-10-06 11:29:46.100515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.527 [2024-10-06 11:29:46.100779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.527 [2024-10-06 11:29:46.100799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.788 [2024-10-06 11:29:46.107936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.788 [2024-10-06 11:29:46.108245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.788 [2024-10-06 11:29:46.108265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.788 [2024-10-06 11:29:46.114801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.788 [2024-10-06 11:29:46.115066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.788 [2024-10-06 11:29:46.115086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.788 [2024-10-06 11:29:46.122212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.788 [2024-10-06 11:29:46.122497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.788 [2024-10-06 11:29:46.122518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.788 [2024-10-06 11:29:46.128933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.788 [2024-10-06 11:29:46.129142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.788 [2024-10-06 11:29:46.129160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.134224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.134431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.134451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.138722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.138946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.138965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.143194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.143364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.143383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.147005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.147191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.147210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.150881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.151041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.151064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.155111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.155322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.155348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.160165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.160428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.160447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.165492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.165763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.165783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.171142] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.171400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.171425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.176969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.177226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.177245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.183365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.183639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.183658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.190228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.190445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.190468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.197729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.198029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.198048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.205030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.205280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.205299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.212255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.212501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.212520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:34:48.789 [2024-10-06 11:29:46.219770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.219994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.220014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.227183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.227443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.227463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.234501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.234744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.234763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.241806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.242011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.242030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.249428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.249613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.249631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.256310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.256591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.256610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.264290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.264485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.264504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.272017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.272217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.272235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.280346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.280540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.280559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.288072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.288328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.288348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.789 [2024-10-06 11:29:46.295418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.789 [2024-10-06 11:29:46.295715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.789 [2024-10-06 11:29:46.295735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.790 [2024-10-06 11:29:46.302743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.790 [2024-10-06 11:29:46.302977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.790 [2024-10-06 11:29:46.302997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.790 [2024-10-06 11:29:46.309761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.790 [2024-10-06 11:29:46.309965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.790 [2024-10-06 11:29:46.309985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.790 [2024-10-06 11:29:46.315238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.790 [2024-10-06 11:29:46.315439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.790 [2024-10-06 11:29:46.315459] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.790 [2024-10-06 11:29:46.319488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.790 [2024-10-06 11:29:46.319675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.790 [2024-10-06 11:29:46.319694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.790 [2024-10-06 11:29:46.323583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.790 [2024-10-06 11:29:46.323743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.790 [2024-10-06 11:29:46.323761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.790 [2024-10-06 11:29:46.327492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.790 [2024-10-06 11:29:46.327653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.790 [2024-10-06 11:29:46.327672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.790 [2024-10-06 11:29:46.331389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.790 [2024-10-06 11:29:46.331547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.790 [2024-10-06 11:29:46.331565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.790 [2024-10-06 11:29:46.335431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.790 [2024-10-06 11:29:46.335589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.790 [2024-10-06 11:29:46.335607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.790 [2024-10-06 11:29:46.339334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.790 [2024-10-06 11:29:46.339492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.790 [2024-10-06 11:29:46.339510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.790 [2024-10-06 11:29:46.343134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.790 [2024-10-06 11:29:46.343293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.790 [2024-10-06 11:29:46.343311] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.790 [2024-10-06 11:29:46.346872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.790 [2024-10-06 11:29:46.347031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.790 [2024-10-06 11:29:46.347048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.790 [2024-10-06 11:29:46.350606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.790 [2024-10-06 11:29:46.350765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.790 [2024-10-06 11:29:46.350787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.790 [2024-10-06 11:29:46.354362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.790 [2024-10-06 11:29:46.354521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.790 [2024-10-06 11:29:46.354539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.790 [2024-10-06 11:29:46.358132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.790 [2024-10-06 11:29:46.358293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.790 [2024-10-06 11:29:46.358311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.790 [2024-10-06 11:29:46.361961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:48.790 [2024-10-06 11:29:46.362130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.790 [2024-10-06 11:29:46.362148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.050 [2024-10-06 11:29:46.365712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:49.050 [2024-10-06 11:29:46.365879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.050 [2024-10-06 11:29:46.365897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.050 [2024-10-06 11:29:46.370017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90 00:34:49.050 [2024-10-06 11:29:46.370186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:49.050 [2024-10-06 11:29:46.370205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:49.050 [2024-10-06 11:29:46.375633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90
00:34:49.050 [2024-10-06 11:29:46.375790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.050 [2024-10-06 11:29:46.375808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:49.050 [2024-10-06 11:29:46.380298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90
00:34:49.050 [2024-10-06 11:29:46.380456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.050 [2024-10-06 11:29:46.380474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:49.050 [2024-10-06 11:29:46.384541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90
00:34:49.050 [2024-10-06 11:29:46.384710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.050 [2024-10-06 11:29:46.384728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.050 [2024-10-06 11:29:46.388789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f3cc0) with pdu=0x2000198fef90
00:34:49.050 [2024-10-06 11:29:46.388949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.050 [2024-10-06 11:29:46.388966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:49.050 5850.00 IOPS, 731.25 MiB/s
00:34:49.050 Latency(us)
00:34:49.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:49.050 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:34:49.050 nvme0n1 : 2.00 5848.96 731.12 0.00 0.00 2731.60 1786.64 11297.16
00:34:49.050 ===================================================================================================================
00:34:49.050 Total : 5848.96 731.12 0.00 0.00 2731.60 1786.64 11297.16
00:34:49.050 {
00:34:49.050 "results": [
00:34:49.050 {
00:34:49.050 "job": "nvme0n1",
00:34:49.050 "core_mask": "0x2",
00:34:49.050 "workload": "randwrite",
00:34:49.050 "status": "finished",
00:34:49.050 "queue_depth": 16,
00:34:49.050 "io_size": 131072,
00:34:49.050 "runtime": 2.002921,
00:34:49.050 "iops": 5848.957597428955,
00:34:49.050 "mibps": 731.1196996786193,
00:34:49.050 "io_failed": 0,
00:34:49.050 "io_timeout": 0,
00:34:49.050 "avg_latency_us": 2731.603028433226,
00:34:49.050 "min_latency_us": 1786.6361904761904,
00:34:49.050 "max_latency_us": 11297.158095238095
00:34:49.050 }
00:34:49.050 ],
00:34:49.050 "core_count": 1
00:34:49.050 }
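For context: each injected data digest error above surfaces as a tcp.c data_crc32_calc_done error followed by a COMMAND TRANSIENT TRANSPORT ERROR completion, and those completions are what the error counter queried below tallies. The summary figures are also internally consistent: at the configured IO size of 131072 bytes, 5848.96 write IOPS works out to roughly 731 MiB/s, matching the "mibps" field in the JSON results. A quick sanity check of that arithmetic, as a standalone awk sketch that is not part of the test scripts (the constants are simply copied from the JSON above):

    # Recompute MiB/s from the "iops" and "io_size" fields of the bdevperf results.
    # 1 MiB = 1048576 bytes, so throughput = iops * io_size / 1048576.
    awk 'BEGIN { iops = 5848.957597428955; io_size = 131072;
                 printf "%.2f MiB/s\n", iops * io_size / 1048576 }'   # prints ~731.12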
11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:49.050 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:49.050 | .driver_specific 00:34:49.050 | .nvme_error 00:34:49.050 | .status_code 00:34:49.050 | .command_transient_transport_error' 00:34:49.050 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:49.050 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 377 > 0 )) 00:34:49.050 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2261451 00:34:49.050 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2261451 ']' 00:34:49.050 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2261451 00:34:49.050 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:34:49.050 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:49.050 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2261451 00:34:49.310 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:49.310 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:49.310 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2261451' 00:34:49.310 killing process with pid 2261451 00:34:49.310 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2261451 00:34:49.310 Received shutdown signal, test time was about 2.000000 seconds 00:34:49.310 00:34:49.310 Latency(us) 00:34:49.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.310 =================================================================================================================== 00:34:49.310 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:49.310 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2261451 00:34:49.310 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2259791 00:34:49.310 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2259791 ']' 00:34:49.310 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2259791 00:34:49.310 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:34:49.310 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:49.310 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2259791 00:34:49.571 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:49.571 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:34:49.571 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2259791' 00:34:49.571 killing process with pid 2259791 00:34:49.571 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2259791 00:34:49.571 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2259791 00:34:49.571 00:34:49.571 real 0m13.854s 00:34:49.571 user 0m26.587s 00:34:49.571 sys 0m4.379s 00:34:49.571 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:49.571 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:49.571 ************************************ 00:34:49.571 END TEST nvmf_digest_error 00:34:49.571 ************************************ 00:34:49.571 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:34:49.571 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:34:49.571 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:49.571 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:34:49.571 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:49.571 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:34:49.571 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:49.571 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:49.571 rmmod nvme_tcp 00:34:49.571 rmmod nvme_fabrics 00:34:49.571 rmmod nvme_keyring 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 2259791 ']' 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 2259791 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 2259791 ']' 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 2259791 00:34:49.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2259791) - No such process 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 2259791 is not found' 00:34:49.831 Process with pid 2259791 is not found 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:49.831 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.737 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:51.737 00:34:51.737 real 0m35.236s 00:34:51.737 user 0m54.625s 00:34:51.737 sys 0m12.777s 00:34:51.737 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:51.737 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:51.737 ************************************ 00:34:51.737 END TEST nvmf_digest 00:34:51.737 ************************************ 00:34:51.737 11:29:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:34:51.737 11:29:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:34:51.737 11:29:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:34:51.737 11:29:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:51.737 11:29:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:51.737 11:29:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:51.737 11:29:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.996 ************************************ 00:34:51.996 START TEST nvmf_bdevperf 00:34:51.996 ************************************ 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:51.996 * Looking for test storage... 
00:34:51.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:34:51.996 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:51.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.997 --rc genhtml_branch_coverage=1 00:34:51.997 --rc genhtml_function_coverage=1 00:34:51.997 --rc genhtml_legend=1 00:34:51.997 --rc geninfo_all_blocks=1 00:34:51.997 --rc geninfo_unexecuted_blocks=1 00:34:51.997 00:34:51.997 ' 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:51.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.997 --rc genhtml_branch_coverage=1 00:34:51.997 --rc genhtml_function_coverage=1 00:34:51.997 --rc genhtml_legend=1 00:34:51.997 --rc geninfo_all_blocks=1 00:34:51.997 --rc geninfo_unexecuted_blocks=1 00:34:51.997 00:34:51.997 ' 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:51.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.997 --rc genhtml_branch_coverage=1 00:34:51.997 --rc genhtml_function_coverage=1 00:34:51.997 --rc genhtml_legend=1 00:34:51.997 --rc geninfo_all_blocks=1 00:34:51.997 --rc geninfo_unexecuted_blocks=1 00:34:51.997 00:34:51.997 ' 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:51.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.997 --rc genhtml_branch_coverage=1 00:34:51.997 --rc genhtml_function_coverage=1 00:34:51.997 --rc genhtml_legend=1 00:34:51.997 --rc geninfo_all_blocks=1 00:34:51.997 --rc geninfo_unexecuted_blocks=1 00:34:51.997 00:34:51.997 ' 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:51.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:34:51.997 11:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:57.270 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:57.271 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:57.271 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
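(The trace above walks nvmf/common.sh through its NIC discovery: supported adapters are collected by PCI vendor/device ID into the e810/x722/mlx arrays, and each matching PCI address is then mapped to its kernel interface name through sysfs. A minimal sketch of that lookup pattern, reusing the 0000:af:00.0 address seen in this run; on any other machine the address and interface names would differ.)

    pci=0000:af:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one sysfs entry per interface, e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
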
00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:57.271 Found net devices under 0000:af:00.0: cvl_0_0 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:57.271 Found net devices under 0000:af:00.1: cvl_0_1 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:57.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:57.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:34:57.271 00:34:57.271 --- 10.0.0.2 ping statistics --- 00:34:57.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:57.271 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:34:57.271 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:57.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:57.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:34:57.531 00:34:57.531 --- 10.0.0.1 ping statistics --- 00:34:57.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:57.531 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=2265381 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 2265381 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2265381 ']' 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:57.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:57.531 11:29:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:57.531 [2024-10-06 11:29:54.943780] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:34:57.532 [2024-10-06 11:29:54.943825] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:57.532 [2024-10-06 11:29:55.000967] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:57.532 [2024-10-06 11:29:55.040312] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:57.532 [2024-10-06 11:29:55.040352] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:57.532 [2024-10-06 11:29:55.040359] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:57.532 [2024-10-06 11:29:55.040365] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:57.532 [2024-10-06 11:29:55.040370] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:57.532 [2024-10-06 11:29:55.041321] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:34:57.532 [2024-10-06 11:29:55.041407] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:34:57.532 [2024-10-06 11:29:55.041408] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:57.791 [2024-10-06 11:29:55.171020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:57.791 Malloc0 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
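(Up to this point the trace has started nvmf_tgt inside the cvl_0_0_ns_spdk namespace and begun configuring it over JSON-RPC: a TCP transport, a 64 MiB Malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1. A hedged sketch of the same three steps issued by hand, assuming the default /var/tmp/spdk.sock RPC socket used here; the values are the ones visible in this run, not general recommendations.)

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                               # TCP transport, 8192 B IO unit size
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                  # 64 MiB backing bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

(The namespace attach of Malloc0 and the 10.0.0.2:4420 TCP listener follow in the next lines of the trace.)
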
00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:57.791 [2024-10-06 11:29:55.238326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:57.791 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:57.791 { 00:34:57.791 "params": { 00:34:57.791 "name": "Nvme$subsystem", 00:34:57.791 "trtype": "$TEST_TRANSPORT", 00:34:57.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:57.791 "adrfam": "ipv4", 00:34:57.791 "trsvcid": "$NVMF_PORT", 00:34:57.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:57.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:57.792 "hdgst": ${hdgst:-false}, 00:34:57.792 "ddgst": ${ddgst:-false} 00:34:57.792 }, 00:34:57.792 "method": "bdev_nvme_attach_controller" 00:34:57.792 } 00:34:57.792 EOF 00:34:57.792 )") 00:34:57.792 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:34:57.792 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:34:57.792 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:34:57.792 11:29:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:57.792 "params": { 00:34:57.792 "name": "Nvme1", 00:34:57.792 "trtype": "tcp", 00:34:57.792 "traddr": "10.0.0.2", 00:34:57.792 "adrfam": "ipv4", 00:34:57.792 "trsvcid": "4420", 00:34:57.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:57.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:57.792 "hdgst": false, 00:34:57.792 "ddgst": false 00:34:57.792 }, 00:34:57.792 "method": "bdev_nvme_attach_controller" 00:34:57.792 }' 00:34:57.792 [2024-10-06 11:29:55.287021] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:34:57.792 [2024-10-06 11:29:55.287069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265414 ] 00:34:57.792 [2024-10-06 11:29:55.341811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.051 [2024-10-06 11:29:55.380900] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:58.310 Running I/O for 1 seconds... 00:34:59.247 11112.00 IOPS, 43.41 MiB/s 00:34:59.248 Latency(us) 00:34:59.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:59.248 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:59.248 Verification LBA range: start 0x0 length 0x4000 00:34:59.248 Nvme1n1 : 1.01 11159.72 43.59 0.00 0.00 11427.97 1786.64 12545.46 00:34:59.248 =================================================================================================================== 00:34:59.248 Total : 11159.72 43.59 0.00 0.00 11427.97 1786.64 12545.46 00:34:59.507 11:29:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2265639 00:34:59.507 11:29:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:59.507 11:29:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:59.507 11:29:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:59.507 11:29:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:34:59.507 11:29:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:34:59.507 11:29:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:59.507 11:29:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:59.507 { 00:34:59.507 "params": { 00:34:59.507 "name": "Nvme$subsystem", 00:34:59.507 "trtype": "$TEST_TRANSPORT", 00:34:59.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:59.507 "adrfam": "ipv4", 00:34:59.507 "trsvcid": "$NVMF_PORT", 00:34:59.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:59.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:59.507 "hdgst": ${hdgst:-false}, 00:34:59.507 "ddgst": ${ddgst:-false} 00:34:59.507 }, 00:34:59.507 "method": "bdev_nvme_attach_controller" 00:34:59.507 } 00:34:59.507 EOF 00:34:59.507 )") 00:34:59.507 11:29:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:34:59.507 11:29:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:34:59.507 11:29:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:34:59.507 11:29:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:59.507 "params": { 00:34:59.507 "name": "Nvme1", 00:34:59.507 "trtype": "tcp", 00:34:59.507 "traddr": "10.0.0.2", 00:34:59.507 "adrfam": "ipv4", 00:34:59.507 "trsvcid": "4420", 00:34:59.507 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:59.507 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:59.507 "hdgst": false, 00:34:59.507 "ddgst": false 00:34:59.507 }, 00:34:59.507 "method": "bdev_nvme_attach_controller" 00:34:59.507 }' 00:34:59.507 [2024-10-06 11:29:56.879712] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:34:59.507 [2024-10-06 11:29:56.879760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265639 ] 00:34:59.507 [2024-10-06 11:29:56.933964] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.507 [2024-10-06 11:29:56.970632] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:59.767 Running I/O for 15 seconds... 00:35:02.347 11148.00 IOPS, 43.55 MiB/s 11321.50 IOPS, 44.22 MiB/s 11:29:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2265381 00:35:02.347 11:29:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:02.347 [2024-10-06 11:29:59.848397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.347 [2024-10-06 11:29:59.848437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.347 [2024-10-06 11:29:59.848458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.347 [2024-10-06 11:29:59.848467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.347 [2024-10-06 11:29:59.848476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.347 [2024-10-06 11:29:59.848483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.347 [2024-10-06 11:29:59.848492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.347 [2024-10-06 11:29:59.848499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.347 [2024-10-06 11:29:59.848508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.348 [2024-10-06 11:29:59.848515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.348 [2024-10-06 11:29:59.848524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.348 [2024-10-06 11:29:59.848531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.348 [2024-10-06 11:29:59.848540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.348 [2024-10-06 11:29:59.848546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.348 [2024-10-06 11:29:59.848554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.348 [2024-10-06 11:29:59.848563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[00:35:02.348-00:35:02.351, 2024-10-06 11:29:59.848571-11:29:59.850305: repeated nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* pairs for queued I/O on sqid:1 nsid:1: WRITE lba:97200-97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ lba:96952-97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:35:02.351 [2024-10-06 11:29:59.850312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23701b0 is same with the state(6) to be set
00:35:02.351 [2024-10-06 11:29:59.850321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:35:02.351 [2024-10-06 11:29:59.850327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:35:02.351 [2024-10-06 11:29:59.850332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97128 len:8 PRP1 0x0 PRP2 0x0
00:35:02.351 [2024-10-06 11:29:59.850340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:02.351 [2024-10-06 11:29:59.850381] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23701b0 was disconnected and freed. reset controller.
00:35:02.351 [2024-10-06 11:29:59.853118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:02.351 [2024-10-06 11:29:59.853170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor
00:35:02.351 [2024-10-06 11:29:59.853731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:02.351 [2024-10-06 11:29:59.853747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420
00:35:02.351 [2024-10-06 11:29:59.853755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set
00:35:02.351 [2024-10-06 11:29:59.853927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor
00:35:02.351 [2024-10-06 11:29:59.854104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:02.351 [2024-10-06 11:29:59.854112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:02.351 [2024-10-06 11:29:59.854120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:02.351 [2024-10-06 11:29:59.856852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[00:35:02.351-00:35:02.614, 2024-10-06 11:29:59.866162-11:30:00.103051: the same sequence repeats for each subsequent reset attempt of [nqn.2016-06.io.spdk:cnode1]: resetting controller, posix_sock_create connect() failed errno = 111, sock connection error of tqpair=0x2373f70 with addr=10.0.0.2 port=4420, Ctrlr is in error state, controller reinitialization failed, Resetting controller failed.]
00:35:02.614 [2024-10-06 11:30:00.112533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.614 [2024-10-06 11:30:00.112930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.614 [2024-10-06 11:30:00.112947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.614 [2024-10-06 11:30:00.112955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.614 [2024-10-06 11:30:00.113132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.614 [2024-10-06 11:30:00.113304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.614 [2024-10-06 11:30:00.113313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.614 [2024-10-06 11:30:00.113319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.614 [2024-10-06 11:30:00.116050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:02.614 [2024-10-06 11:30:00.125575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.614 [2024-10-06 11:30:00.125977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.614 [2024-10-06 11:30:00.125993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.614 [2024-10-06 11:30:00.126001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.614 [2024-10-06 11:30:00.126178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.614 [2024-10-06 11:30:00.126350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.614 [2024-10-06 11:30:00.126358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.614 [2024-10-06 11:30:00.126364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.614 [2024-10-06 11:30:00.129101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:02.614 [2024-10-06 11:30:00.138625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.614 [2024-10-06 11:30:00.138981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.614 [2024-10-06 11:30:00.138997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.614 [2024-10-06 11:30:00.139004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.614 [2024-10-06 11:30:00.139180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.614 [2024-10-06 11:30:00.139351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.614 [2024-10-06 11:30:00.139360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.614 [2024-10-06 11:30:00.139366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.614 [2024-10-06 11:30:00.142108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:02.614 [2024-10-06 11:30:00.151631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.614 [2024-10-06 11:30:00.152031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.614 [2024-10-06 11:30:00.152049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.614 [2024-10-06 11:30:00.152056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.614 [2024-10-06 11:30:00.152233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.614 [2024-10-06 11:30:00.152405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.614 [2024-10-06 11:30:00.152414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.614 [2024-10-06 11:30:00.152420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.614 [2024-10-06 11:30:00.155158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:02.614 [2024-10-06 11:30:00.164706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.614 [2024-10-06 11:30:00.165095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.614 [2024-10-06 11:30:00.165113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.614 [2024-10-06 11:30:00.165120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.614 [2024-10-06 11:30:00.165292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.614 [2024-10-06 11:30:00.165462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.614 [2024-10-06 11:30:00.165471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.614 [2024-10-06 11:30:00.165477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.614 [2024-10-06 11:30:00.168212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:02.614 [2024-10-06 11:30:00.177738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.614 [2024-10-06 11:30:00.178086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.614 [2024-10-06 11:30:00.178103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.615 [2024-10-06 11:30:00.178110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.615 [2024-10-06 11:30:00.178281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.615 [2024-10-06 11:30:00.178452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.615 [2024-10-06 11:30:00.178461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.615 [2024-10-06 11:30:00.178467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.615 [2024-10-06 11:30:00.181235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:02.875 [2024-10-06 11:30:00.190780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.875 [2024-10-06 11:30:00.191177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.875 [2024-10-06 11:30:00.191198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.875 [2024-10-06 11:30:00.191206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.875 [2024-10-06 11:30:00.191378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.875 [2024-10-06 11:30:00.191550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.875 [2024-10-06 11:30:00.191559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.875 [2024-10-06 11:30:00.191565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.875 [2024-10-06 11:30:00.194374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:02.875 [2024-10-06 11:30:00.203756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.875 [2024-10-06 11:30:00.204200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.875 [2024-10-06 11:30:00.204217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.875 [2024-10-06 11:30:00.204225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.875 [2024-10-06 11:30:00.204396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.875 [2024-10-06 11:30:00.204568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.875 [2024-10-06 11:30:00.204576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.875 [2024-10-06 11:30:00.204583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.875 [2024-10-06 11:30:00.207321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:02.875 [2024-10-06 11:30:00.216879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.875 [2024-10-06 11:30:00.217275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.875 [2024-10-06 11:30:00.217293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.875 [2024-10-06 11:30:00.217301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.875 [2024-10-06 11:30:00.217484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.875 [2024-10-06 11:30:00.217666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.875 [2024-10-06 11:30:00.217675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.875 [2024-10-06 11:30:00.217682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.875 [2024-10-06 11:30:00.220583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:02.875 [2024-10-06 11:30:00.230113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.875 [2024-10-06 11:30:00.230586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.875 [2024-10-06 11:30:00.230603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.875 [2024-10-06 11:30:00.230610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.875 [2024-10-06 11:30:00.230792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.875 [2024-10-06 11:30:00.230978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.875 [2024-10-06 11:30:00.230987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.875 [2024-10-06 11:30:00.230993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.875 [2024-10-06 11:30:00.233880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:02.875 [2024-10-06 11:30:00.243206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.875 [2024-10-06 11:30:00.243578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.875 [2024-10-06 11:30:00.243595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.875 [2024-10-06 11:30:00.243602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.875 [2024-10-06 11:30:00.243773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.875 [2024-10-06 11:30:00.243945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.875 [2024-10-06 11:30:00.243953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.875 [2024-10-06 11:30:00.243959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.875 [2024-10-06 11:30:00.246694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:02.875 [2024-10-06 11:30:00.256228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.875 [2024-10-06 11:30:00.256606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.875 [2024-10-06 11:30:00.256623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.876 [2024-10-06 11:30:00.256630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.876 [2024-10-06 11:30:00.256801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.876 [2024-10-06 11:30:00.256972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.876 [2024-10-06 11:30:00.256980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.876 [2024-10-06 11:30:00.256987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.876 [2024-10-06 11:30:00.259726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:02.876 [2024-10-06 11:30:00.269263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.876 [2024-10-06 11:30:00.269631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.876 [2024-10-06 11:30:00.269648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.876 [2024-10-06 11:30:00.269656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.876 [2024-10-06 11:30:00.269827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.876 [2024-10-06 11:30:00.269998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.876 [2024-10-06 11:30:00.270006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.876 [2024-10-06 11:30:00.270012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.876 [2024-10-06 11:30:00.272750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:02.876 [2024-10-06 11:30:00.282286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.876 [2024-10-06 11:30:00.282660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.876 [2024-10-06 11:30:00.282677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.876 [2024-10-06 11:30:00.282684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.876 [2024-10-06 11:30:00.282856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.876 [2024-10-06 11:30:00.283031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.876 [2024-10-06 11:30:00.283040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.876 [2024-10-06 11:30:00.283046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.876 [2024-10-06 11:30:00.285796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:02.876 [2024-10-06 11:30:00.295347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.876 [2024-10-06 11:30:00.295670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.876 [2024-10-06 11:30:00.295686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.876 [2024-10-06 11:30:00.295694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.876 [2024-10-06 11:30:00.295864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.876 [2024-10-06 11:30:00.296037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.876 [2024-10-06 11:30:00.296045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.876 [2024-10-06 11:30:00.296051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.876 [2024-10-06 11:30:00.298791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:02.876 [2024-10-06 11:30:00.308328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.876 [2024-10-06 11:30:00.308743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.876 [2024-10-06 11:30:00.308759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.876 [2024-10-06 11:30:00.308766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.876 [2024-10-06 11:30:00.308937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.876 [2024-10-06 11:30:00.309114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.876 [2024-10-06 11:30:00.309123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.876 [2024-10-06 11:30:00.309129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.876 [2024-10-06 11:30:00.311862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:02.876 [2024-10-06 11:30:00.321406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.876 [2024-10-06 11:30:00.321781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.876 [2024-10-06 11:30:00.321797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.876 [2024-10-06 11:30:00.321808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.876 [2024-10-06 11:30:00.321979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.876 [2024-10-06 11:30:00.322159] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.876 [2024-10-06 11:30:00.322167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.876 [2024-10-06 11:30:00.322173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.876 [2024-10-06 11:30:00.324904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:02.876 9501.00 IOPS, 37.11 MiB/s [2024-10-06 11:30:00.335740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.876 [2024-10-06 11:30:00.336230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.876 [2024-10-06 11:30:00.336277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.876 [2024-10-06 11:30:00.336300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.876 [2024-10-06 11:30:00.336672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.876 [2024-10-06 11:30:00.336844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.876 [2024-10-06 11:30:00.336852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.876 [2024-10-06 11:30:00.336858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.876 [2024-10-06 11:30:00.339596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:02.876 [2024-10-06 11:30:00.348814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.876 [2024-10-06 11:30:00.349288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.876 [2024-10-06 11:30:00.349306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.876 [2024-10-06 11:30:00.349313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.876 [2024-10-06 11:30:00.349485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.876 [2024-10-06 11:30:00.349656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.876 [2024-10-06 11:30:00.349664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.876 [2024-10-06 11:30:00.349671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.876 [2024-10-06 11:30:00.352478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:02.876 [2024-10-06 11:30:00.362202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.876 [2024-10-06 11:30:00.362558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.876 [2024-10-06 11:30:00.362576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.876 [2024-10-06 11:30:00.362585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.876 [2024-10-06 11:30:00.362779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.876 [2024-10-06 11:30:00.362973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.876 [2024-10-06 11:30:00.362987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.876 [2024-10-06 11:30:00.362994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.876 [2024-10-06 11:30:00.366115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:02.876 [2024-10-06 11:30:00.375656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.876 [2024-10-06 11:30:00.376053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.876 [2024-10-06 11:30:00.376079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.876 [2024-10-06 11:30:00.376088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.876 [2024-10-06 11:30:00.376283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.876 [2024-10-06 11:30:00.376478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.876 [2024-10-06 11:30:00.376487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.876 [2024-10-06 11:30:00.376495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.876 [2024-10-06 11:30:00.379505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:02.876 [2024-10-06 11:30:00.388665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.876 [2024-10-06 11:30:00.389041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.877 [2024-10-06 11:30:00.389108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.877 [2024-10-06 11:30:00.389132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.877 [2024-10-06 11:30:00.389708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.877 [2024-10-06 11:30:00.389880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.877 [2024-10-06 11:30:00.389889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.877 [2024-10-06 11:30:00.389895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.877 [2024-10-06 11:30:00.392638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:02.877 [2024-10-06 11:30:00.401695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.877 [2024-10-06 11:30:00.402020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.877 [2024-10-06 11:30:00.402037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.877 [2024-10-06 11:30:00.402044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.877 [2024-10-06 11:30:00.402224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.877 [2024-10-06 11:30:00.402396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.877 [2024-10-06 11:30:00.402404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.877 [2024-10-06 11:30:00.402410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.877 [2024-10-06 11:30:00.405147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:02.877 [2024-10-06 11:30:00.414676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.877 [2024-10-06 11:30:00.415151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.877 [2024-10-06 11:30:00.415196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.877 [2024-10-06 11:30:00.415219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.877 [2024-10-06 11:30:00.415796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.877 [2024-10-06 11:30:00.415997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.877 [2024-10-06 11:30:00.416006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.877 [2024-10-06 11:30:00.416012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.877 [2024-10-06 11:30:00.418761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:02.877 [2024-10-06 11:30:00.427650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.877 [2024-10-06 11:30:00.427982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.877 [2024-10-06 11:30:00.427999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.877 [2024-10-06 11:30:00.428006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.877 [2024-10-06 11:30:00.428184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.877 [2024-10-06 11:30:00.428356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.877 [2024-10-06 11:30:00.428365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.877 [2024-10-06 11:30:00.428371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.877 [2024-10-06 11:30:00.431111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:02.877 [2024-10-06 11:30:00.440632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.877 [2024-10-06 11:30:00.441105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.877 [2024-10-06 11:30:00.441123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:02.877 [2024-10-06 11:30:00.441132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:02.877 [2024-10-06 11:30:00.441304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:02.877 [2024-10-06 11:30:00.441479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.877 [2024-10-06 11:30:00.441488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.877 [2024-10-06 11:30:00.441493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.877 [2024-10-06 11:30:00.444232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.136 [2024-10-06 11:30:00.453610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.136 [2024-10-06 11:30:00.454076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.136 [2024-10-06 11:30:00.454093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.136 [2024-10-06 11:30:00.454101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.136 [2024-10-06 11:30:00.454277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.136 [2024-10-06 11:30:00.454449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.136 [2024-10-06 11:30:00.454459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.136 [2024-10-06 11:30:00.454465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.136 [2024-10-06 11:30:00.457201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.136 [2024-10-06 11:30:00.466550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.136 [2024-10-06 11:30:00.467017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.136 [2024-10-06 11:30:00.467034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.136 [2024-10-06 11:30:00.467042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.136 [2024-10-06 11:30:00.467220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.136 [2024-10-06 11:30:00.467392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.136 [2024-10-06 11:30:00.467401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.136 [2024-10-06 11:30:00.467407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.136 [2024-10-06 11:30:00.470145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.136 [2024-10-06 11:30:00.479512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.136 [2024-10-06 11:30:00.479997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.136 [2024-10-06 11:30:00.480014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.136 [2024-10-06 11:30:00.480021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.136 [2024-10-06 11:30:00.480199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.136 [2024-10-06 11:30:00.480371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.136 [2024-10-06 11:30:00.480379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.136 [2024-10-06 11:30:00.480385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.136 [2024-10-06 11:30:00.483115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.136 [2024-10-06 11:30:00.492369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.136 [2024-10-06 11:30:00.492827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.136 [2024-10-06 11:30:00.492842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.136 [2024-10-06 11:30:00.492850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.136 [2024-10-06 11:30:00.493031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.136 [2024-10-06 11:30:00.493208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.136 [2024-10-06 11:30:00.493216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.136 [2024-10-06 11:30:00.493226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.136 [2024-10-06 11:30:00.495950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.137 [2024-10-06 11:30:00.505305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.137 [2024-10-06 11:30:00.505787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.137 [2024-10-06 11:30:00.505803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.137 [2024-10-06 11:30:00.505811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.137 [2024-10-06 11:30:00.505982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.137 [2024-10-06 11:30:00.506161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.137 [2024-10-06 11:30:00.506169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.137 [2024-10-06 11:30:00.506175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.137 [2024-10-06 11:30:00.508921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.137 [2024-10-06 11:30:00.518278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.137 [2024-10-06 11:30:00.518770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.137 [2024-10-06 11:30:00.518787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.137 [2024-10-06 11:30:00.518794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.137 [2024-10-06 11:30:00.518965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.137 [2024-10-06 11:30:00.519143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.137 [2024-10-06 11:30:00.519152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.137 [2024-10-06 11:30:00.519158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.137 [2024-10-06 11:30:00.521888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.137 [2024-10-06 11:30:00.531255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.137 [2024-10-06 11:30:00.531728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.137 [2024-10-06 11:30:00.531773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.137 [2024-10-06 11:30:00.531796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.137 [2024-10-06 11:30:00.532387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.137 [2024-10-06 11:30:00.532873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.137 [2024-10-06 11:30:00.532882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.137 [2024-10-06 11:30:00.532903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.137 [2024-10-06 11:30:00.537328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.137 [2024-10-06 11:30:00.545086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.137 [2024-10-06 11:30:00.545427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.137 [2024-10-06 11:30:00.545443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.137 [2024-10-06 11:30:00.545451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.137 [2024-10-06 11:30:00.545634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.137 [2024-10-06 11:30:00.545817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.137 [2024-10-06 11:30:00.545826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.137 [2024-10-06 11:30:00.545834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.137 [2024-10-06 11:30:00.548750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.137 [2024-10-06 11:30:00.557971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.137 [2024-10-06 11:30:00.558453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.137 [2024-10-06 11:30:00.558470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.137 [2024-10-06 11:30:00.558477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.137 [2024-10-06 11:30:00.558649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.137 [2024-10-06 11:30:00.558821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.137 [2024-10-06 11:30:00.558830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.137 [2024-10-06 11:30:00.558836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.137 [2024-10-06 11:30:00.561572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.137 [2024-10-06 11:30:00.570973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.137 [2024-10-06 11:30:00.571372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.137 [2024-10-06 11:30:00.571389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.137 [2024-10-06 11:30:00.571397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.137 [2024-10-06 11:30:00.571567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.137 [2024-10-06 11:30:00.571739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.137 [2024-10-06 11:30:00.571747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.137 [2024-10-06 11:30:00.571753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.137 [2024-10-06 11:30:00.574493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.137 [2024-10-06 11:30:00.584034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.137 [2024-10-06 11:30:00.584530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.137 [2024-10-06 11:30:00.584575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.137 [2024-10-06 11:30:00.584598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.137 [2024-10-06 11:30:00.585097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.137 [2024-10-06 11:30:00.585276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.137 [2024-10-06 11:30:00.585284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.137 [2024-10-06 11:30:00.585291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.137 [2024-10-06 11:30:00.588031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.137 [2024-10-06 11:30:00.597085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.137 [2024-10-06 11:30:00.597530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.137 [2024-10-06 11:30:00.597547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.137 [2024-10-06 11:30:00.597554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.137 [2024-10-06 11:30:00.597725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.137 [2024-10-06 11:30:00.597896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.137 [2024-10-06 11:30:00.597905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.137 [2024-10-06 11:30:00.597910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.137 [2024-10-06 11:30:00.600648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.137 [2024-10-06 11:30:00.610175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.137 [2024-10-06 11:30:00.610616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.137 [2024-10-06 11:30:00.610633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.137 [2024-10-06 11:30:00.610640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.137 [2024-10-06 11:30:00.610812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.137 [2024-10-06 11:30:00.610984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.137 [2024-10-06 11:30:00.610992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.137 [2024-10-06 11:30:00.610999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.137 [2024-10-06 11:30:00.613737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.137 [2024-10-06 11:30:00.623267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.137 [2024-10-06 11:30:00.623661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.137 [2024-10-06 11:30:00.623704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.137 [2024-10-06 11:30:00.623727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.137 [2024-10-06 11:30:00.624196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.137 [2024-10-06 11:30:00.624368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.137 [2024-10-06 11:30:00.624377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.137 [2024-10-06 11:30:00.624383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.137 [2024-10-06 11:30:00.627122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.137 [2024-10-06 11:30:00.636319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.137 [2024-10-06 11:30:00.636771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.137 [2024-10-06 11:30:00.636816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.137 [2024-10-06 11:30:00.636839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.137 [2024-10-06 11:30:00.637298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.137 [2024-10-06 11:30:00.637471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.137 [2024-10-06 11:30:00.637480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.137 [2024-10-06 11:30:00.637486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.137 [2024-10-06 11:30:00.640190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.137 [2024-10-06 11:30:00.649354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.137 [2024-10-06 11:30:00.649826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.137 [2024-10-06 11:30:00.649871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.137 [2024-10-06 11:30:00.649893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.137 [2024-10-06 11:30:00.650329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.137 [2024-10-06 11:30:00.650502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.137 [2024-10-06 11:30:00.650510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.137 [2024-10-06 11:30:00.650516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.137 [2024-10-06 11:30:00.653251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.137 [2024-10-06 11:30:00.662456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.137 [2024-10-06 11:30:00.662897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.137 [2024-10-06 11:30:00.662913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.137 [2024-10-06 11:30:00.662920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.137 [2024-10-06 11:30:00.663097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.137 [2024-10-06 11:30:00.663269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.137 [2024-10-06 11:30:00.663277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.137 [2024-10-06 11:30:00.663284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.137 [2024-10-06 11:30:00.666015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.137 [2024-10-06 11:30:00.675539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.137 [2024-10-06 11:30:00.676008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.137 [2024-10-06 11:30:00.676029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.137 [2024-10-06 11:30:00.676036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.137 [2024-10-06 11:30:00.676219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.137 [2024-10-06 11:30:00.676393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.137 [2024-10-06 11:30:00.676401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.137 [2024-10-06 11:30:00.676408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.137 [2024-10-06 11:30:00.679144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.137 [2024-10-06 11:30:00.688521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.137 [2024-10-06 11:30:00.688983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.137 [2024-10-06 11:30:00.688999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.137 [2024-10-06 11:30:00.689007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.137 [2024-10-06 11:30:00.689184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.137 [2024-10-06 11:30:00.689356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.137 [2024-10-06 11:30:00.689365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.137 [2024-10-06 11:30:00.689371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.137 [2024-10-06 11:30:00.692110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.137 [2024-10-06 11:30:00.701460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.137 [2024-10-06 11:30:00.701920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.137 [2024-10-06 11:30:00.701937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.137 [2024-10-06 11:30:00.701944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.137 [2024-10-06 11:30:00.702125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.137 [2024-10-06 11:30:00.702296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.137 [2024-10-06 11:30:00.702304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.137 [2024-10-06 11:30:00.702311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.137 [2024-10-06 11:30:00.705043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.397 [2024-10-06 11:30:00.714412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.397 [2024-10-06 11:30:00.714854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.397 [2024-10-06 11:30:00.714871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.397 [2024-10-06 11:30:00.714878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.397 [2024-10-06 11:30:00.715050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.397 [2024-10-06 11:30:00.715234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.397 [2024-10-06 11:30:00.715242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.397 [2024-10-06 11:30:00.715248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.397 [2024-10-06 11:30:00.717977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.397 [2024-10-06 11:30:00.727365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.397 [2024-10-06 11:30:00.727860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.397 [2024-10-06 11:30:00.727903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.397 [2024-10-06 11:30:00.727926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.397 [2024-10-06 11:30:00.728378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.397 [2024-10-06 11:30:00.728551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.397 [2024-10-06 11:30:00.728559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.397 [2024-10-06 11:30:00.728565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.397 [2024-10-06 11:30:00.731305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.397 [2024-10-06 11:30:00.740352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.397 [2024-10-06 11:30:00.740814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.397 [2024-10-06 11:30:00.740852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.397 [2024-10-06 11:30:00.740876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.397 [2024-10-06 11:30:00.741412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.397 [2024-10-06 11:30:00.741584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.397 [2024-10-06 11:30:00.741592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.397 [2024-10-06 11:30:00.741599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.397 [2024-10-06 11:30:00.744335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.397 [2024-10-06 11:30:00.753378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.397 [2024-10-06 11:30:00.753798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.397 [2024-10-06 11:30:00.753814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.397 [2024-10-06 11:30:00.753822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.397 [2024-10-06 11:30:00.753993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.397 [2024-10-06 11:30:00.754174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.397 [2024-10-06 11:30:00.754183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.397 [2024-10-06 11:30:00.754190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.397 [2024-10-06 11:30:00.756963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.397 [2024-10-06 11:30:00.766334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.397 [2024-10-06 11:30:00.766785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.397 [2024-10-06 11:30:00.766802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.397 [2024-10-06 11:30:00.766810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.397 [2024-10-06 11:30:00.766982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.397 [2024-10-06 11:30:00.767159] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.397 [2024-10-06 11:30:00.767168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.397 [2024-10-06 11:30:00.767175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.397 [2024-10-06 11:30:00.769906] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.397 [2024-10-06 11:30:00.779268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.397 [2024-10-06 11:30:00.779748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.397 [2024-10-06 11:30:00.779793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.397 [2024-10-06 11:30:00.779817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.397 [2024-10-06 11:30:00.780357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.397 [2024-10-06 11:30:00.780544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.398 [2024-10-06 11:30:00.780553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.398 [2024-10-06 11:30:00.780559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.398 [2024-10-06 11:30:00.783291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.398 [2024-10-06 11:30:00.792348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.398 [2024-10-06 11:30:00.792822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.398 [2024-10-06 11:30:00.792839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.398 [2024-10-06 11:30:00.792846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.398 [2024-10-06 11:30:00.793018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.398 [2024-10-06 11:30:00.793196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.398 [2024-10-06 11:30:00.793205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.398 [2024-10-06 11:30:00.793211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.398 [2024-10-06 11:30:00.795829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.398 [2024-10-06 11:30:00.805120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.398 [2024-10-06 11:30:00.805535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.398 [2024-10-06 11:30:00.805579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.398 [2024-10-06 11:30:00.805609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.398 [2024-10-06 11:30:00.806204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.398 [2024-10-06 11:30:00.806706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.398 [2024-10-06 11:30:00.806714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.398 [2024-10-06 11:30:00.806720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.398 [2024-10-06 11:30:00.809315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.398 [2024-10-06 11:30:00.817837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.398 [2024-10-06 11:30:00.818311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.398 [2024-10-06 11:30:00.818356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.398 [2024-10-06 11:30:00.818380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.398 [2024-10-06 11:30:00.818747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.398 [2024-10-06 11:30:00.818914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.398 [2024-10-06 11:30:00.818922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.398 [2024-10-06 11:30:00.818928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.398 [2024-10-06 11:30:00.821525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.398 [2024-10-06 11:30:00.830677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.398 [2024-10-06 11:30:00.831154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.398 [2024-10-06 11:30:00.831171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.398 [2024-10-06 11:30:00.831177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.398 [2024-10-06 11:30:00.831348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.398 [2024-10-06 11:30:00.831505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.398 [2024-10-06 11:30:00.831513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.398 [2024-10-06 11:30:00.831518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.398 [2024-10-06 11:30:00.834099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.398 [2024-10-06 11:30:00.843378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.398 [2024-10-06 11:30:00.843872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.398 [2024-10-06 11:30:00.843917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.398 [2024-10-06 11:30:00.843940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.398 [2024-10-06 11:30:00.844434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.398 [2024-10-06 11:30:00.844602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.398 [2024-10-06 11:30:00.844613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.398 [2024-10-06 11:30:00.844619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.398 [2024-10-06 11:30:00.847217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.398 [2024-10-06 11:30:00.856153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.398 [2024-10-06 11:30:00.856607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.398 [2024-10-06 11:30:00.856624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.398 [2024-10-06 11:30:00.856631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.398 [2024-10-06 11:30:00.856802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.398 [2024-10-06 11:30:00.856973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.398 [2024-10-06 11:30:00.856981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.398 [2024-10-06 11:30:00.856988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.398 [2024-10-06 11:30:00.859722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.398 [2024-10-06 11:30:00.869123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.398 [2024-10-06 11:30:00.869583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.398 [2024-10-06 11:30:00.869599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.398 [2024-10-06 11:30:00.869607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.398 [2024-10-06 11:30:00.869779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.398 [2024-10-06 11:30:00.869951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.398 [2024-10-06 11:30:00.869959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.398 [2024-10-06 11:30:00.869966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.398 [2024-10-06 11:30:00.872683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.398 [2024-10-06 11:30:00.882083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.398 [2024-10-06 11:30:00.882537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.398 [2024-10-06 11:30:00.882582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.398 [2024-10-06 11:30:00.882605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.398 [2024-10-06 11:30:00.883021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.398 [2024-10-06 11:30:00.883196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.398 [2024-10-06 11:30:00.883205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.398 [2024-10-06 11:30:00.883211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.398 [2024-10-06 11:30:00.885877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.398 [2024-10-06 11:30:00.894895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.398 [2024-10-06 11:30:00.895403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.398 [2024-10-06 11:30:00.895448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.398 [2024-10-06 11:30:00.895472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.398 [2024-10-06 11:30:00.895969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.398 [2024-10-06 11:30:00.896143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.398 [2024-10-06 11:30:00.896151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.398 [2024-10-06 11:30:00.896157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.398 [2024-10-06 11:30:00.898752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.398 [2024-10-06 11:30:00.907657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.398 [2024-10-06 11:30:00.908135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.398 [2024-10-06 11:30:00.908152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.398 [2024-10-06 11:30:00.908160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.399 [2024-10-06 11:30:00.908327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.399 [2024-10-06 11:30:00.908498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.399 [2024-10-06 11:30:00.908506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.399 [2024-10-06 11:30:00.908512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.399 [2024-10-06 11:30:00.911114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.399 [2024-10-06 11:30:00.920415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.399 [2024-10-06 11:30:00.920824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.399 [2024-10-06 11:30:00.920840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.399 [2024-10-06 11:30:00.920848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.399 [2024-10-06 11:30:00.921014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.399 [2024-10-06 11:30:00.921189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.399 [2024-10-06 11:30:00.921198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.399 [2024-10-06 11:30:00.921205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.399 [2024-10-06 11:30:00.923801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.399 [2024-10-06 11:30:00.933249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.399 [2024-10-06 11:30:00.933733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.399 [2024-10-06 11:30:00.933778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.399 [2024-10-06 11:30:00.933802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.399 [2024-10-06 11:30:00.934406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.399 [2024-10-06 11:30:00.934948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.399 [2024-10-06 11:30:00.934956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.399 [2024-10-06 11:30:00.934963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.399 [2024-10-06 11:30:00.937558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.399 [2024-10-06 11:30:00.946067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.399 [2024-10-06 11:30:00.946481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.399 [2024-10-06 11:30:00.946525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.399 [2024-10-06 11:30:00.946548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.399 [2024-10-06 11:30:00.947074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.399 [2024-10-06 11:30:00.947242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.399 [2024-10-06 11:30:00.947250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.399 [2024-10-06 11:30:00.947256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.399 [2024-10-06 11:30:00.951439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.399 [2024-10-06 11:30:00.959697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.399 [2024-10-06 11:30:00.960135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.399 [2024-10-06 11:30:00.960153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.399 [2024-10-06 11:30:00.960160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.399 [2024-10-06 11:30:00.960342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.399 [2024-10-06 11:30:00.960529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.399 [2024-10-06 11:30:00.960538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.399 [2024-10-06 11:30:00.960544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.399 [2024-10-06 11:30:00.963456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.659 [2024-10-06 11:30:00.972605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.659 [2024-10-06 11:30:00.973070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.659 [2024-10-06 11:30:00.973114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.659 [2024-10-06 11:30:00.973138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.659 [2024-10-06 11:30:00.973718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.659 [2024-10-06 11:30:00.973915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.659 [2024-10-06 11:30:00.973924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.659 [2024-10-06 11:30:00.973934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.659 [2024-10-06 11:30:00.976594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.659 [2024-10-06 11:30:00.985440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.659 [2024-10-06 11:30:00.985936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.659 [2024-10-06 11:30:00.985979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.659 [2024-10-06 11:30:00.986002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.659 [2024-10-06 11:30:00.986592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.659 [2024-10-06 11:30:00.987165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.659 [2024-10-06 11:30:00.987174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.659 [2024-10-06 11:30:00.987181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.659 [2024-10-06 11:30:00.991187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.659 [2024-10-06 11:30:00.999557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.659 [2024-10-06 11:30:01.000042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.659 [2024-10-06 11:30:01.000098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.659 [2024-10-06 11:30:01.000123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.659 [2024-10-06 11:30:01.000703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.659 [2024-10-06 11:30:01.001250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.659 [2024-10-06 11:30:01.001259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.659 [2024-10-06 11:30:01.001266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.659 [2024-10-06 11:30:01.004178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.659 [2024-10-06 11:30:01.012372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.659 [2024-10-06 11:30:01.012834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.659 [2024-10-06 11:30:01.012879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.659 [2024-10-06 11:30:01.012903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.659 [2024-10-06 11:30:01.013496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.659 [2024-10-06 11:30:01.014088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.659 [2024-10-06 11:30:01.014115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.659 [2024-10-06 11:30:01.014135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.659 [2024-10-06 11:30:01.016755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.659 [2024-10-06 11:30:01.025177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.659 [2024-10-06 11:30:01.025626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.659 [2024-10-06 11:30:01.025642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.659 [2024-10-06 11:30:01.025649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.659 [2024-10-06 11:30:01.025816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.659 [2024-10-06 11:30:01.025983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.659 [2024-10-06 11:30:01.025990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.659 [2024-10-06 11:30:01.025996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.659 [2024-10-06 11:30:01.028598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.659 [2024-10-06 11:30:01.037900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.659 [2024-10-06 11:30:01.038356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.659 [2024-10-06 11:30:01.038373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.659 [2024-10-06 11:30:01.038380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.659 [2024-10-06 11:30:01.038546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.659 [2024-10-06 11:30:01.038712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.659 [2024-10-06 11:30:01.038720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.660 [2024-10-06 11:30:01.038726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.660 [2024-10-06 11:30:01.041325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.660 [2024-10-06 11:30:01.050630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.660 [2024-10-06 11:30:01.051084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.660 [2024-10-06 11:30:01.051101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.660 [2024-10-06 11:30:01.051108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.660 [2024-10-06 11:30:01.051275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.660 [2024-10-06 11:30:01.051441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.660 [2024-10-06 11:30:01.051449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.660 [2024-10-06 11:30:01.051456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.660 [2024-10-06 11:30:01.054048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.660 [2024-10-06 11:30:01.063447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.660 [2024-10-06 11:30:01.063901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.660 [2024-10-06 11:30:01.063917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.660 [2024-10-06 11:30:01.063925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.660 [2024-10-06 11:30:01.064103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.660 [2024-10-06 11:30:01.064274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.660 [2024-10-06 11:30:01.064282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.660 [2024-10-06 11:30:01.064288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.660 [2024-10-06 11:30:01.066885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.660 [2024-10-06 11:30:01.076178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.660 [2024-10-06 11:30:01.076644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.660 [2024-10-06 11:30:01.076661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.660 [2024-10-06 11:30:01.076668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.660 [2024-10-06 11:30:01.076834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.660 [2024-10-06 11:30:01.077000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.660 [2024-10-06 11:30:01.077008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.660 [2024-10-06 11:30:01.077014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.660 [2024-10-06 11:30:01.079611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.660 [2024-10-06 11:30:01.088916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.660 [2024-10-06 11:30:01.089395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.660 [2024-10-06 11:30:01.089440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.660 [2024-10-06 11:30:01.089463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.660 [2024-10-06 11:30:01.090083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.660 [2024-10-06 11:30:01.090461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.660 [2024-10-06 11:30:01.090469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.660 [2024-10-06 11:30:01.090476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.660 [2024-10-06 11:30:01.093077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.660 [2024-10-06 11:30:01.101637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.660 [2024-10-06 11:30:01.102117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.660 [2024-10-06 11:30:01.102134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.660 [2024-10-06 11:30:01.102141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.660 [2024-10-06 11:30:01.102308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.660 [2024-10-06 11:30:01.102475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.660 [2024-10-06 11:30:01.102482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.660 [2024-10-06 11:30:01.102489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.660 [2024-10-06 11:30:01.105096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.660 [2024-10-06 11:30:01.114404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.660 [2024-10-06 11:30:01.114904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.660 [2024-10-06 11:30:01.114921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.660 [2024-10-06 11:30:01.114928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.660 [2024-10-06 11:30:01.115108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.660 [2024-10-06 11:30:01.115280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.660 [2024-10-06 11:30:01.115288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.660 [2024-10-06 11:30:01.115296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.660 [2024-10-06 11:30:01.118028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.660 [2024-10-06 11:30:01.127489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.660 [2024-10-06 11:30:01.127952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.660 [2024-10-06 11:30:01.127968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.660 [2024-10-06 11:30:01.127976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.660 [2024-10-06 11:30:01.128156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.660 [2024-10-06 11:30:01.128328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.660 [2024-10-06 11:30:01.128336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.660 [2024-10-06 11:30:01.128342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.660 [2024-10-06 11:30:01.131077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.660 [2024-10-06 11:30:01.140352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.660 [2024-10-06 11:30:01.140766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.660 [2024-10-06 11:30:01.140783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.660 [2024-10-06 11:30:01.140790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.660 [2024-10-06 11:30:01.140957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.660 [2024-10-06 11:30:01.141130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.660 [2024-10-06 11:30:01.141139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.660 [2024-10-06 11:30:01.141145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.660 [2024-10-06 11:30:01.143804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.660 [2024-10-06 11:30:01.153090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.660 [2024-10-06 11:30:01.153477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.660 [2024-10-06 11:30:01.153493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.660 [2024-10-06 11:30:01.153503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.660 [2024-10-06 11:30:01.153661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.660 [2024-10-06 11:30:01.153818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.660 [2024-10-06 11:30:01.153826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.660 [2024-10-06 11:30:01.153832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.660 [2024-10-06 11:30:01.156435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.660 [2024-10-06 11:30:01.165883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.660 [2024-10-06 11:30:01.166374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.660 [2024-10-06 11:30:01.166391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.660 [2024-10-06 11:30:01.166398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.660 [2024-10-06 11:30:01.166565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.660 [2024-10-06 11:30:01.166732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.661 [2024-10-06 11:30:01.166739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.661 [2024-10-06 11:30:01.166745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.661 [2024-10-06 11:30:01.169351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.661 [2024-10-06 11:30:01.178650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.661 [2024-10-06 11:30:01.179089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.661 [2024-10-06 11:30:01.179106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.661 [2024-10-06 11:30:01.179113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.661 [2024-10-06 11:30:01.179280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.661 [2024-10-06 11:30:01.179446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.661 [2024-10-06 11:30:01.179454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.661 [2024-10-06 11:30:01.179460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.661 [2024-10-06 11:30:01.182055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.661 [2024-10-06 11:30:01.191372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.661 [2024-10-06 11:30:01.191767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.661 [2024-10-06 11:30:01.191783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.661 [2024-10-06 11:30:01.191790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.661 [2024-10-06 11:30:01.191958] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.661 [2024-10-06 11:30:01.192130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.661 [2024-10-06 11:30:01.192145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.661 [2024-10-06 11:30:01.192152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.661 [2024-10-06 11:30:01.194742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.661 [2024-10-06 11:30:01.204181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.661 [2024-10-06 11:30:01.204575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.661 [2024-10-06 11:30:01.204620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.661 [2024-10-06 11:30:01.204643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.661 [2024-10-06 11:30:01.205235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.661 [2024-10-06 11:30:01.205711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.661 [2024-10-06 11:30:01.205719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.661 [2024-10-06 11:30:01.205725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.661 [2024-10-06 11:30:01.208321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.661 [2024-10-06 11:30:01.217027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.661 [2024-10-06 11:30:01.217503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.661 [2024-10-06 11:30:01.217519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.661 [2024-10-06 11:30:01.217526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.661 [2024-10-06 11:30:01.217693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.661 [2024-10-06 11:30:01.217860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.661 [2024-10-06 11:30:01.217868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.661 [2024-10-06 11:30:01.217874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.661 [2024-10-06 11:30:01.220480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.661 [2024-10-06 11:30:01.229935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.661 [2024-10-06 11:30:01.230412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.661 [2024-10-06 11:30:01.230429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.661 [2024-10-06 11:30:01.230436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.661 [2024-10-06 11:30:01.230602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.661 [2024-10-06 11:30:01.230768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.661 [2024-10-06 11:30:01.230776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.661 [2024-10-06 11:30:01.230782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.921 [2024-10-06 11:30:01.233448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.921 [2024-10-06 11:30:01.242707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.921 [2024-10-06 11:30:01.243160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.921 [2024-10-06 11:30:01.243177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.921 [2024-10-06 11:30:01.243184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.921 [2024-10-06 11:30:01.243353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.921 [2024-10-06 11:30:01.243510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.921 [2024-10-06 11:30:01.243518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.921 [2024-10-06 11:30:01.243524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.921 [2024-10-06 11:30:01.246108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.921 [2024-10-06 11:30:01.255534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.921 [2024-10-06 11:30:01.255928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.921 [2024-10-06 11:30:01.255945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.921 [2024-10-06 11:30:01.255952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.921 [2024-10-06 11:30:01.256123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.921 [2024-10-06 11:30:01.256290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.921 [2024-10-06 11:30:01.256298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.921 [2024-10-06 11:30:01.256305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.921 [2024-10-06 11:30:01.258897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.921 [2024-10-06 11:30:01.268374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.921 [2024-10-06 11:30:01.268828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.921 [2024-10-06 11:30:01.268873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.921 [2024-10-06 11:30:01.268896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.921 [2024-10-06 11:30:01.269488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.921 [2024-10-06 11:30:01.269928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.921 [2024-10-06 11:30:01.269936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.921 [2024-10-06 11:30:01.269942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.921 [2024-10-06 11:30:01.272536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.921 [2024-10-06 11:30:01.281086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.921 [2024-10-06 11:30:01.281538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.921 [2024-10-06 11:30:01.281555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.921 [2024-10-06 11:30:01.281566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.921 [2024-10-06 11:30:01.281732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.921 [2024-10-06 11:30:01.281900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.921 [2024-10-06 11:30:01.281908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.921 [2024-10-06 11:30:01.281914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.921 [2024-10-06 11:30:01.284515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.921 [2024-10-06 11:30:01.293817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.921 [2024-10-06 11:30:01.294297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.921 [2024-10-06 11:30:01.294343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.921 [2024-10-06 11:30:01.294366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.921 [2024-10-06 11:30:01.294753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.922 [2024-10-06 11:30:01.294911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.922 [2024-10-06 11:30:01.294919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.922 [2024-10-06 11:30:01.294925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.922 [2024-10-06 11:30:01.297534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.922 [2024-10-06 11:30:01.306550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.922 [2024-10-06 11:30:01.306968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.922 [2024-10-06 11:30:01.306984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.922 [2024-10-06 11:30:01.306991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.922 [2024-10-06 11:30:01.307166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.922 [2024-10-06 11:30:01.307332] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.922 [2024-10-06 11:30:01.307340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.922 [2024-10-06 11:30:01.307346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.922 [2024-10-06 11:30:01.309946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.922 [2024-10-06 11:30:01.319274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.922 [2024-10-06 11:30:01.319726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.922 [2024-10-06 11:30:01.319742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.922 [2024-10-06 11:30:01.319748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.922 [2024-10-06 11:30:01.319914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.922 [2024-10-06 11:30:01.320087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.922 [2024-10-06 11:30:01.320099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.922 [2024-10-06 11:30:01.320105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.922 [2024-10-06 11:30:01.322702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.922 [2024-10-06 11:30:01.332011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.922 [2024-10-06 11:30:01.332455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.922 [2024-10-06 11:30:01.332499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.922 [2024-10-06 11:30:01.332522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.922 [2024-10-06 11:30:01.333019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.922 [2024-10-06 11:30:01.333194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.922 [2024-10-06 11:30:01.333203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.922 [2024-10-06 11:30:01.333208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.922 7125.75 IOPS, 27.83 MiB/s [2024-10-06 11:30:01.336976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.922 [2024-10-06 11:30:01.344803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.922 [2024-10-06 11:30:01.345293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.922 [2024-10-06 11:30:01.345340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.922 [2024-10-06 11:30:01.345363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.922 [2024-10-06 11:30:01.345837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.922 [2024-10-06 11:30:01.346005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.922 [2024-10-06 11:30:01.346013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.922 [2024-10-06 11:30:01.346019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.922 [2024-10-06 11:30:01.348626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
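The bdevperf sample interleaved above (7125.75 IOPS, 27.83 MiB/s) is self-consistent if the workload uses 4 KiB I/O, which is an assumption here rather than something the log states: 7125.75 IOPS x 4096 B per I/O = 29,187,072 B/s, and 29,187,072 / 1,048,576 ≈ 27.83 MiB/s, matching the reported throughput.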
00:35:03.922 [2024-10-06 11:30:01.357641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.922 [2024-10-06 11:30:01.358079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.922 [2024-10-06 11:30:01.358125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.922 [2024-10-06 11:30:01.358149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.922 [2024-10-06 11:30:01.358648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.922 [2024-10-06 11:30:01.358815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.922 [2024-10-06 11:30:01.358823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.922 [2024-10-06 11:30:01.358829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.922 [2024-10-06 11:30:01.361402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.922 [2024-10-06 11:30:01.370396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.922 [2024-10-06 11:30:01.370860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.922 [2024-10-06 11:30:01.370877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.922 [2024-10-06 11:30:01.370885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.922 [2024-10-06 11:30:01.371057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.922 [2024-10-06 11:30:01.371233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.922 [2024-10-06 11:30:01.371242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.922 [2024-10-06 11:30:01.371249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.922 [2024-10-06 11:30:01.373981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.922 [2024-10-06 11:30:01.383463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.922 [2024-10-06 11:30:01.383964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.922 [2024-10-06 11:30:01.383980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.922 [2024-10-06 11:30:01.383987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.922 [2024-10-06 11:30:01.384162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.922 [2024-10-06 11:30:01.384335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.922 [2024-10-06 11:30:01.384344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.922 [2024-10-06 11:30:01.384351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.922 [2024-10-06 11:30:01.387111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.922 [2024-10-06 11:30:01.396408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.922 [2024-10-06 11:30:01.396722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.922 [2024-10-06 11:30:01.396738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.922 [2024-10-06 11:30:01.396745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.922 [2024-10-06 11:30:01.396912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.922 [2024-10-06 11:30:01.397090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.922 [2024-10-06 11:30:01.397100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.922 [2024-10-06 11:30:01.397106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.922 [2024-10-06 11:30:01.399752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.922 [2024-10-06 11:30:01.409353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.922 [2024-10-06 11:30:01.409784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.922 [2024-10-06 11:30:01.409801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.922 [2024-10-06 11:30:01.409809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.922 [2024-10-06 11:30:01.409979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.922 [2024-10-06 11:30:01.410153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.922 [2024-10-06 11:30:01.410162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.922 [2024-10-06 11:30:01.410168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.922 [2024-10-06 11:30:01.412770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.922 [2024-10-06 11:30:01.422083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.922 [2024-10-06 11:30:01.422492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.922 [2024-10-06 11:30:01.422509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.922 [2024-10-06 11:30:01.422516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.923 [2024-10-06 11:30:01.422685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.923 [2024-10-06 11:30:01.422851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.923 [2024-10-06 11:30:01.422860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.923 [2024-10-06 11:30:01.422866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.923 [2024-10-06 11:30:01.425472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.923 [2024-10-06 11:30:01.434928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.923 [2024-10-06 11:30:01.435257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.923 [2024-10-06 11:30:01.435273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.923 [2024-10-06 11:30:01.435281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.923 [2024-10-06 11:30:01.435447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.923 [2024-10-06 11:30:01.435613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.923 [2024-10-06 11:30:01.435621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.923 [2024-10-06 11:30:01.435628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.923 [2024-10-06 11:30:01.438305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.923 [2024-10-06 11:30:01.447890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.923 [2024-10-06 11:30:01.448220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.923 [2024-10-06 11:30:01.448237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.923 [2024-10-06 11:30:01.448244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.923 [2024-10-06 11:30:01.448412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.923 [2024-10-06 11:30:01.448581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.923 [2024-10-06 11:30:01.448590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.923 [2024-10-06 11:30:01.448600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.923 [2024-10-06 11:30:01.451301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.923 [2024-10-06 11:30:01.460886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.923 [2024-10-06 11:30:01.461224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.923 [2024-10-06 11:30:01.461241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.923 [2024-10-06 11:30:01.461248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.923 [2024-10-06 11:30:01.461420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.923 [2024-10-06 11:30:01.461591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.923 [2024-10-06 11:30:01.461599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.923 [2024-10-06 11:30:01.461606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.923 [2024-10-06 11:30:01.464322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:03.923 [2024-10-06 11:30:01.473823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.923 [2024-10-06 11:30:01.474219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.923 [2024-10-06 11:30:01.474237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.923 [2024-10-06 11:30:01.474244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.923 [2024-10-06 11:30:01.474412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.923 [2024-10-06 11:30:01.474579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.923 [2024-10-06 11:30:01.474587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.923 [2024-10-06 11:30:01.474593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.923 [2024-10-06 11:30:01.477254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:03.923 [2024-10-06 11:30:01.486708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:03.923 [2024-10-06 11:30:01.487079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:03.923 [2024-10-06 11:30:01.487095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:03.923 [2024-10-06 11:30:01.487103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:03.923 [2024-10-06 11:30:01.487270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:03.923 [2024-10-06 11:30:01.487462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:03.923 [2024-10-06 11:30:01.487470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:03.923 [2024-10-06 11:30:01.487476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:03.923 [2024-10-06 11:30:01.490142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.229 [2024-10-06 11:30:01.499737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.229 [2024-10-06 11:30:01.500137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.229 [2024-10-06 11:30:01.500155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.229 [2024-10-06 11:30:01.500162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.229 [2024-10-06 11:30:01.500334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.229 [2024-10-06 11:30:01.500505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.229 [2024-10-06 11:30:01.500513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.229 [2024-10-06 11:30:01.500519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.229 [2024-10-06 11:30:01.503261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.229 [2024-10-06 11:30:01.512912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.229 [2024-10-06 11:30:01.513327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.229 [2024-10-06 11:30:01.513344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.229 [2024-10-06 11:30:01.513352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.229 [2024-10-06 11:30:01.513534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.229 [2024-10-06 11:30:01.513721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.229 [2024-10-06 11:30:01.513730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.229 [2024-10-06 11:30:01.513737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.229 [2024-10-06 11:30:01.516656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.229 [2024-10-06 11:30:01.526550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.229 [2024-10-06 11:30:01.526916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.229 [2024-10-06 11:30:01.526934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.229 [2024-10-06 11:30:01.526943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.229 [2024-10-06 11:30:01.527160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.229 [2024-10-06 11:30:01.527368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.229 [2024-10-06 11:30:01.527378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.229 [2024-10-06 11:30:01.527386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.229 [2024-10-06 11:30:01.530712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.229 [2024-10-06 11:30:01.540148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.229 [2024-10-06 11:30:01.540537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.229 [2024-10-06 11:30:01.540556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.229 [2024-10-06 11:30:01.540564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.229 [2024-10-06 11:30:01.540772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.229 [2024-10-06 11:30:01.540986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.229 [2024-10-06 11:30:01.540996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.229 [2024-10-06 11:30:01.541004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.229 [2024-10-06 11:30:01.544343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.229 [2024-10-06 11:30:01.553845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.229 [2024-10-06 11:30:01.554188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.229 [2024-10-06 11:30:01.554208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.229 [2024-10-06 11:30:01.554217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.229 [2024-10-06 11:30:01.554426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.229 [2024-10-06 11:30:01.554634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.229 [2024-10-06 11:30:01.554644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.229 [2024-10-06 11:30:01.554652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.229 [2024-10-06 11:30:01.557969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.229 [2024-10-06 11:30:01.567580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.229 [2024-10-06 11:30:01.568067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.229 [2024-10-06 11:30:01.568086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.229 [2024-10-06 11:30:01.568095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.229 [2024-10-06 11:30:01.568303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.229 [2024-10-06 11:30:01.568511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.229 [2024-10-06 11:30:01.568521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.229 [2024-10-06 11:30:01.568529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.229 [2024-10-06 11:30:01.571850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.229 [2024-10-06 11:30:01.581132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.229 [2024-10-06 11:30:01.581544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.229 [2024-10-06 11:30:01.581562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.229 [2024-10-06 11:30:01.581570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.229 [2024-10-06 11:30:01.581777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.229 [2024-10-06 11:30:01.581985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.229 [2024-10-06 11:30:01.581994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.229 [2024-10-06 11:30:01.582002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.229 [2024-10-06 11:30:01.585220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.229 [2024-10-06 11:30:01.594451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.229 [2024-10-06 11:30:01.594840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.229 [2024-10-06 11:30:01.594857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.229 [2024-10-06 11:30:01.594865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.230 [2024-10-06 11:30:01.595067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.230 [2024-10-06 11:30:01.595272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.230 [2024-10-06 11:30:01.595282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.230 [2024-10-06 11:30:01.595289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.230 [2024-10-06 11:30:01.598294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.230 [2024-10-06 11:30:01.607812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.230 [2024-10-06 11:30:01.608233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.230 [2024-10-06 11:30:01.608251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.230 [2024-10-06 11:30:01.608260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.230 [2024-10-06 11:30:01.608454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.230 [2024-10-06 11:30:01.608649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.230 [2024-10-06 11:30:01.608658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.230 [2024-10-06 11:30:01.608665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.230 [2024-10-06 11:30:01.611623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.230 [2024-10-06 11:30:01.621071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.230 [2024-10-06 11:30:01.621578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.230 [2024-10-06 11:30:01.621596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.230 [2024-10-06 11:30:01.621604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.230 [2024-10-06 11:30:01.621799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.230 [2024-10-06 11:30:01.621994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.230 [2024-10-06 11:30:01.622003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.230 [2024-10-06 11:30:01.622010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.230 [2024-10-06 11:30:01.625031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.230 [2024-10-06 11:30:01.634454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.230 [2024-10-06 11:30:01.634940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.230 [2024-10-06 11:30:01.634957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.230 [2024-10-06 11:30:01.634969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.230 [2024-10-06 11:30:01.635169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.230 [2024-10-06 11:30:01.635365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.230 [2024-10-06 11:30:01.635374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.230 [2024-10-06 11:30:01.635381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.230 [2024-10-06 11:30:01.638483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.230 [2024-10-06 11:30:01.647824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.230 [2024-10-06 11:30:01.648324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.230 [2024-10-06 11:30:01.648343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.230 [2024-10-06 11:30:01.648352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.230 [2024-10-06 11:30:01.648560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.230 [2024-10-06 11:30:01.648773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.230 [2024-10-06 11:30:01.648784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.230 [2024-10-06 11:30:01.648791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.230 [2024-10-06 11:30:01.652133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.230 [2024-10-06 11:30:01.661319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.230 [2024-10-06 11:30:01.661699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.230 [2024-10-06 11:30:01.661717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.230 [2024-10-06 11:30:01.661725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.230 [2024-10-06 11:30:01.661918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.230 [2024-10-06 11:30:01.662118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.230 [2024-10-06 11:30:01.662128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.230 [2024-10-06 11:30:01.662136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.230 [2024-10-06 11:30:01.665239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.230 [2024-10-06 11:30:01.674823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.230 [2024-10-06 11:30:01.675278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.230 [2024-10-06 11:30:01.675296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.230 [2024-10-06 11:30:01.675305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.230 [2024-10-06 11:30:01.675512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.230 [2024-10-06 11:30:01.675723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.230 [2024-10-06 11:30:01.675733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.230 [2024-10-06 11:30:01.675742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.230 [2024-10-06 11:30:01.678944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.230 [2024-10-06 11:30:01.688073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.230 [2024-10-06 11:30:01.688571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.230 [2024-10-06 11:30:01.688615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.230 [2024-10-06 11:30:01.688639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.230 [2024-10-06 11:30:01.689231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.230 [2024-10-06 11:30:01.689813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.230 [2024-10-06 11:30:01.689822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.230 [2024-10-06 11:30:01.689828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.230 [2024-10-06 11:30:01.692746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.230 [2024-10-06 11:30:01.701150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.230 [2024-10-06 11:30:01.701594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.230 [2024-10-06 11:30:01.701611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.230 [2024-10-06 11:30:01.701618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.230 [2024-10-06 11:30:01.701800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.230 [2024-10-06 11:30:01.701983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.230 [2024-10-06 11:30:01.701991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.230 [2024-10-06 11:30:01.701998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.230 [2024-10-06 11:30:01.704762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.230 [2024-10-06 11:30:01.714123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.230 [2024-10-06 11:30:01.714459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.230 [2024-10-06 11:30:01.714502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.230 [2024-10-06 11:30:01.714526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.230 [2024-10-06 11:30:01.715017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.230 [2024-10-06 11:30:01.715195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.230 [2024-10-06 11:30:01.715204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.230 [2024-10-06 11:30:01.715211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.230 [2024-10-06 11:30:01.717935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.230 [2024-10-06 11:30:01.726854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.230 [2024-10-06 11:30:01.727244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.230 [2024-10-06 11:30:01.727262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.231 [2024-10-06 11:30:01.727269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.231 [2024-10-06 11:30:01.727435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.231 [2024-10-06 11:30:01.727601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.231 [2024-10-06 11:30:01.727609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.231 [2024-10-06 11:30:01.727615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.231 [2024-10-06 11:30:01.730371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.231 [2024-10-06 11:30:01.739899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.231 [2024-10-06 11:30:01.740362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.231 [2024-10-06 11:30:01.740379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.231 [2024-10-06 11:30:01.740386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.231 [2024-10-06 11:30:01.740557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.231 [2024-10-06 11:30:01.740728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.231 [2024-10-06 11:30:01.740736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.231 [2024-10-06 11:30:01.740742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.231 [2024-10-06 11:30:01.743655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.231 [2024-10-06 11:30:01.753223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.231 [2024-10-06 11:30:01.753697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.231 [2024-10-06 11:30:01.753715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.231 [2024-10-06 11:30:01.753723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.231 [2024-10-06 11:30:01.753917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.231 [2024-10-06 11:30:01.754120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.231 [2024-10-06 11:30:01.754129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.231 [2024-10-06 11:30:01.754137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.231 [2024-10-06 11:30:01.757236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.231 [2024-10-06 11:30:01.766510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.231 [2024-10-06 11:30:01.767020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.231 [2024-10-06 11:30:01.767078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.231 [2024-10-06 11:30:01.767111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.231 [2024-10-06 11:30:01.767690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.231 [2024-10-06 11:30:01.768122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.231 [2024-10-06 11:30:01.768131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.231 [2024-10-06 11:30:01.768138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.231 [2024-10-06 11:30:01.771136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.231 [2024-10-06 11:30:01.779579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.231 [2024-10-06 11:30:01.779946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.231 [2024-10-06 11:30:01.779963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.231 [2024-10-06 11:30:01.779971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.231 [2024-10-06 11:30:01.780148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.231 [2024-10-06 11:30:01.780319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.231 [2024-10-06 11:30:01.780327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.231 [2024-10-06 11:30:01.780334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.231 [2024-10-06 11:30:01.783071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.231 [2024-10-06 11:30:01.792553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.231 [2024-10-06 11:30:01.792974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.231 [2024-10-06 11:30:01.792990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.231 [2024-10-06 11:30:01.792998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.231 [2024-10-06 11:30:01.793175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.231 [2024-10-06 11:30:01.793347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.231 [2024-10-06 11:30:01.793355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.231 [2024-10-06 11:30:01.793362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.231 [2024-10-06 11:30:01.796065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.490 [2024-10-06 11:30:01.805469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.490 [2024-10-06 11:30:01.805829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.490 [2024-10-06 11:30:01.805846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.490 [2024-10-06 11:30:01.805853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.490 [2024-10-06 11:30:01.806021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.490 [2024-10-06 11:30:01.806196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.490 [2024-10-06 11:30:01.806208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.490 [2024-10-06 11:30:01.806215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.490 [2024-10-06 11:30:01.808870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.490 [2024-10-06 11:30:01.818289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.490 [2024-10-06 11:30:01.818775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.490 [2024-10-06 11:30:01.818818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.490 [2024-10-06 11:30:01.818841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.490 [2024-10-06 11:30:01.819432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.490 [2024-10-06 11:30:01.819998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.490 [2024-10-06 11:30:01.820006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.490 [2024-10-06 11:30:01.820011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.490 [2024-10-06 11:30:01.822611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.490 [2024-10-06 11:30:01.831007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.490 [2024-10-06 11:30:01.831458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.490 [2024-10-06 11:30:01.831474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.490 [2024-10-06 11:30:01.831481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.490 [2024-10-06 11:30:01.831638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.490 [2024-10-06 11:30:01.831795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.490 [2024-10-06 11:30:01.831803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.490 [2024-10-06 11:30:01.831809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.490 [2024-10-06 11:30:01.834326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.490 [2024-10-06 11:30:01.843816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.490 [2024-10-06 11:30:01.844269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.490 [2024-10-06 11:30:01.844314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.490 [2024-10-06 11:30:01.844336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.491 [2024-10-06 11:30:01.844809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.491 [2024-10-06 11:30:01.844968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.491 [2024-10-06 11:30:01.844975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.491 [2024-10-06 11:30:01.844981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.491 [2024-10-06 11:30:01.847589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.491 [2024-10-06 11:30:01.856588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.491 [2024-10-06 11:30:01.856964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.491 [2024-10-06 11:30:01.856979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.491 [2024-10-06 11:30:01.856986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.491 [2024-10-06 11:30:01.857158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.491 [2024-10-06 11:30:01.857325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.491 [2024-10-06 11:30:01.857332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.491 [2024-10-06 11:30:01.857337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.491 [2024-10-06 11:30:01.859988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.491 [2024-10-06 11:30:01.869558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.491 [2024-10-06 11:30:01.870013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.491 [2024-10-06 11:30:01.870029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.491 [2024-10-06 11:30:01.870036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.491 [2024-10-06 11:30:01.870209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.491 [2024-10-06 11:30:01.870377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.491 [2024-10-06 11:30:01.870384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.491 [2024-10-06 11:30:01.870390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.491 [2024-10-06 11:30:01.872981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.491 [2024-10-06 11:30:01.882589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.491 [2024-10-06 11:30:01.882993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.491 [2024-10-06 11:30:01.883009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.491 [2024-10-06 11:30:01.883016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.491 [2024-10-06 11:30:01.883193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.491 [2024-10-06 11:30:01.883364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.491 [2024-10-06 11:30:01.883372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.491 [2024-10-06 11:30:01.883379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.491 [2024-10-06 11:30:01.886140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.491 [2024-10-06 11:30:01.895468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.491 [2024-10-06 11:30:01.895821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.491 [2024-10-06 11:30:01.895837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.491 [2024-10-06 11:30:01.895844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.491 [2024-10-06 11:30:01.896014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.491 [2024-10-06 11:30:01.896186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.491 [2024-10-06 11:30:01.896194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.491 [2024-10-06 11:30:01.896200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.491 [2024-10-06 11:30:01.898997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.491 [2024-10-06 11:30:01.908473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.491 [2024-10-06 11:30:01.908897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.491 [2024-10-06 11:30:01.908941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.491 [2024-10-06 11:30:01.908964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.491 [2024-10-06 11:30:01.909431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.491 [2024-10-06 11:30:01.909600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.491 [2024-10-06 11:30:01.909608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.491 [2024-10-06 11:30:01.909614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.491 [2024-10-06 11:30:01.912273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.491 [2024-10-06 11:30:01.921301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.491 [2024-10-06 11:30:01.921755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.491 [2024-10-06 11:30:01.921772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.491 [2024-10-06 11:30:01.921780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.491 [2024-10-06 11:30:01.921946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.491 [2024-10-06 11:30:01.922118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.491 [2024-10-06 11:30:01.922126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.491 [2024-10-06 11:30:01.922132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.491 [2024-10-06 11:30:01.924732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.491 [2024-10-06 11:30:01.934032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.491 [2024-10-06 11:30:01.934420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.491 [2024-10-06 11:30:01.934437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.491 [2024-10-06 11:30:01.934444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.491 [2024-10-06 11:30:01.934611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.491 [2024-10-06 11:30:01.934778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.491 [2024-10-06 11:30:01.934786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.491 [2024-10-06 11:30:01.934796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.491 [2024-10-06 11:30:01.937394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.491 [2024-10-06 11:30:01.946810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.491 [2024-10-06 11:30:01.947208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.491 [2024-10-06 11:30:01.947225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.491 [2024-10-06 11:30:01.947233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.491 [2024-10-06 11:30:01.947400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.491 [2024-10-06 11:30:01.947567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.491 [2024-10-06 11:30:01.947575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.491 [2024-10-06 11:30:01.947581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.491 [2024-10-06 11:30:01.950181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.491 [2024-10-06 11:30:01.959620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.491 [2024-10-06 11:30:01.960079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.491 [2024-10-06 11:30:01.960116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.491 [2024-10-06 11:30:01.960142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.491 [2024-10-06 11:30:01.960719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.491 [2024-10-06 11:30:01.961316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.491 [2024-10-06 11:30:01.961342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.491 [2024-10-06 11:30:01.961373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.491 [2024-10-06 11:30:01.963964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.491 [2024-10-06 11:30:01.972386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.492 [2024-10-06 11:30:01.972860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.492 [2024-10-06 11:30:01.972903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.492 [2024-10-06 11:30:01.972926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.492 [2024-10-06 11:30:01.973464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.492 [2024-10-06 11:30:01.973623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.492 [2024-10-06 11:30:01.973631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.492 [2024-10-06 11:30:01.973637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.492 [2024-10-06 11:30:01.976223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.492 [2024-10-06 11:30:01.985213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.492 [2024-10-06 11:30:01.985666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.492 [2024-10-06 11:30:01.985686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.492 [2024-10-06 11:30:01.985693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.492 [2024-10-06 11:30:01.985859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.492 [2024-10-06 11:30:01.986026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.492 [2024-10-06 11:30:01.986034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.492 [2024-10-06 11:30:01.986039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.492 [2024-10-06 11:30:01.988648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.492 [2024-10-06 11:30:01.997940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.492 [2024-10-06 11:30:01.998346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.492 [2024-10-06 11:30:01.998362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.492 [2024-10-06 11:30:01.998369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.492 [2024-10-06 11:30:01.998536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.492 [2024-10-06 11:30:01.998702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.492 [2024-10-06 11:30:01.998710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.492 [2024-10-06 11:30:01.998716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.492 [2024-10-06 11:30:02.001317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.492 [2024-10-06 11:30:02.010749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.492 [2024-10-06 11:30:02.011230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.492 [2024-10-06 11:30:02.011276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.492 [2024-10-06 11:30:02.011299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.492 [2024-10-06 11:30:02.011470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.492 [2024-10-06 11:30:02.011628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.492 [2024-10-06 11:30:02.011636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.492 [2024-10-06 11:30:02.011642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.492 [2024-10-06 11:30:02.014231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.492 [2024-10-06 11:30:02.023513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.492 [2024-10-06 11:30:02.024009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.492 [2024-10-06 11:30:02.024052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.492 [2024-10-06 11:30:02.024088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.492 [2024-10-06 11:30:02.024581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.492 [2024-10-06 11:30:02.024752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.492 [2024-10-06 11:30:02.024760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.492 [2024-10-06 11:30:02.024766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.492 [2024-10-06 11:30:02.027360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.492 [2024-10-06 11:30:02.036356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.492 [2024-10-06 11:30:02.036761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.492 [2024-10-06 11:30:02.036777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.492 [2024-10-06 11:30:02.036784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.492 [2024-10-06 11:30:02.036950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.492 [2024-10-06 11:30:02.037121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.492 [2024-10-06 11:30:02.037130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.492 [2024-10-06 11:30:02.037136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.492 [2024-10-06 11:30:02.039664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.492 [2024-10-06 11:30:02.049088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.492 [2024-10-06 11:30:02.049494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.492 [2024-10-06 11:30:02.049509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.492 [2024-10-06 11:30:02.049516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.492 [2024-10-06 11:30:02.049683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.492 [2024-10-06 11:30:02.049850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.492 [2024-10-06 11:30:02.049858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.492 [2024-10-06 11:30:02.049864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.492 [2024-10-06 11:30:02.052462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.492 [2024-10-06 11:30:02.062023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.492 [2024-10-06 11:30:02.062455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.492 [2024-10-06 11:30:02.062501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.492 [2024-10-06 11:30:02.062525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.492 [2024-10-06 11:30:02.063116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.492 [2024-10-06 11:30:02.063698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.492 [2024-10-06 11:30:02.063735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.492 [2024-10-06 11:30:02.063742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.752 [2024-10-06 11:30:02.066437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.752 [2024-10-06 11:30:02.074866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.752 [2024-10-06 11:30:02.075249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.752 [2024-10-06 11:30:02.075267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.752 [2024-10-06 11:30:02.075275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.752 [2024-10-06 11:30:02.075442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.752 [2024-10-06 11:30:02.075609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.752 [2024-10-06 11:30:02.075617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.752 [2024-10-06 11:30:02.075624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.752 [2024-10-06 11:30:02.078240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.752 [2024-10-06 11:30:02.087681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.752 [2024-10-06 11:30:02.088088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.752 [2024-10-06 11:30:02.088132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.752 [2024-10-06 11:30:02.088157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.752 [2024-10-06 11:30:02.088746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.752 [2024-10-06 11:30:02.089023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.752 [2024-10-06 11:30:02.089036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.752 [2024-10-06 11:30:02.089046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.752 [2024-10-06 11:30:02.093485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.752 [2024-10-06 11:30:02.101271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.752 [2024-10-06 11:30:02.101759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.752 [2024-10-06 11:30:02.101804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.752 [2024-10-06 11:30:02.101827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.752 [2024-10-06 11:30:02.102421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.752 [2024-10-06 11:30:02.102877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.752 [2024-10-06 11:30:02.102885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.752 [2024-10-06 11:30:02.102892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.752 [2024-10-06 11:30:02.105796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.752 [2024-10-06 11:30:02.114021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.752 [2024-10-06 11:30:02.114352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.752 [2024-10-06 11:30:02.114396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.752 [2024-10-06 11:30:02.114427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.752 [2024-10-06 11:30:02.114901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.752 [2024-10-06 11:30:02.115065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.752 [2024-10-06 11:30:02.115072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.752 [2024-10-06 11:30:02.115078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.752 [2024-10-06 11:30:02.117589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.752 [2024-10-06 11:30:02.126781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.752 [2024-10-06 11:30:02.127238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.752 [2024-10-06 11:30:02.127254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.752 [2024-10-06 11:30:02.127261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.752 [2024-10-06 11:30:02.127428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.752 [2024-10-06 11:30:02.127594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.752 [2024-10-06 11:30:02.127602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.752 [2024-10-06 11:30:02.127609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.752 [2024-10-06 11:30:02.130350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.752 [2024-10-06 11:30:02.139827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.752 [2024-10-06 11:30:02.140253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.752 [2024-10-06 11:30:02.140271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.752 [2024-10-06 11:30:02.140278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.752 [2024-10-06 11:30:02.140449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.752 [2024-10-06 11:30:02.140620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.752 [2024-10-06 11:30:02.140629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.752 [2024-10-06 11:30:02.140635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.752 [2024-10-06 11:30:02.143334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.752 [2024-10-06 11:30:02.152551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.752 [2024-10-06 11:30:02.153021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.752 [2024-10-06 11:30:02.153077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.752 [2024-10-06 11:30:02.153101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.752 [2024-10-06 11:30:02.153597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.752 [2024-10-06 11:30:02.153765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.752 [2024-10-06 11:30:02.153775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.752 [2024-10-06 11:30:02.153782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.752 [2024-10-06 11:30:02.156443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.752 [2024-10-06 11:30:02.165352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.752 [2024-10-06 11:30:02.165732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.752 [2024-10-06 11:30:02.165748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.752 [2024-10-06 11:30:02.165756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.752 [2024-10-06 11:30:02.165922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.752 [2024-10-06 11:30:02.166094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.752 [2024-10-06 11:30:02.166102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.752 [2024-10-06 11:30:02.166108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.753 [2024-10-06 11:30:02.168701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.753 [2024-10-06 11:30:02.178137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.753 [2024-10-06 11:30:02.178609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.753 [2024-10-06 11:30:02.178658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.753 [2024-10-06 11:30:02.178681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.753 [2024-10-06 11:30:02.179273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.753 [2024-10-06 11:30:02.179856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.753 [2024-10-06 11:30:02.179881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.753 [2024-10-06 11:30:02.179902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.753 [2024-10-06 11:30:02.184375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.753 [2024-10-06 11:30:02.191989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.753 [2024-10-06 11:30:02.192468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.753 [2024-10-06 11:30:02.192485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.753 [2024-10-06 11:30:02.192493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.753 [2024-10-06 11:30:02.192675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.753 [2024-10-06 11:30:02.192857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.753 [2024-10-06 11:30:02.192866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.753 [2024-10-06 11:30:02.192872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.753 [2024-10-06 11:30:02.195788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.753 [2024-10-06 11:30:02.204760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.753 [2024-10-06 11:30:02.205199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.753 [2024-10-06 11:30:02.205245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.753 [2024-10-06 11:30:02.205268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.753 [2024-10-06 11:30:02.205736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.753 [2024-10-06 11:30:02.205895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.753 [2024-10-06 11:30:02.205902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.753 [2024-10-06 11:30:02.205908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.753 [2024-10-06 11:30:02.208514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.753 [2024-10-06 11:30:02.217506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.753 [2024-10-06 11:30:02.217988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.753 [2024-10-06 11:30:02.218032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.753 [2024-10-06 11:30:02.218054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.753 [2024-10-06 11:30:02.218650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.753 [2024-10-06 11:30:02.218861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.753 [2024-10-06 11:30:02.218869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.753 [2024-10-06 11:30:02.218876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.753 [2024-10-06 11:30:02.221471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.753 [2024-10-06 11:30:02.230215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.753 [2024-10-06 11:30:02.230675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.753 [2024-10-06 11:30:02.230719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.753 [2024-10-06 11:30:02.230742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.753 [2024-10-06 11:30:02.231145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.753 [2024-10-06 11:30:02.231313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.753 [2024-10-06 11:30:02.231321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.753 [2024-10-06 11:30:02.231327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.753 [2024-10-06 11:30:02.233920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.753 [2024-10-06 11:30:02.242923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.753 [2024-10-06 11:30:02.243343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.753 [2024-10-06 11:30:02.243360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.753 [2024-10-06 11:30:02.243370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.753 [2024-10-06 11:30:02.243536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.753 [2024-10-06 11:30:02.243703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.753 [2024-10-06 11:30:02.243711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.753 [2024-10-06 11:30:02.243717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.753 [2024-10-06 11:30:02.246316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.753 [2024-10-06 11:30:02.255753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.753 [2024-10-06 11:30:02.256149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.753 [2024-10-06 11:30:02.256195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.753 [2024-10-06 11:30:02.256218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.753 [2024-10-06 11:30:02.256795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.753 [2024-10-06 11:30:02.257010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.753 [2024-10-06 11:30:02.257017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.753 [2024-10-06 11:30:02.257023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.753 [2024-10-06 11:30:02.259634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.753 [2024-10-06 11:30:02.268479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.753 [2024-10-06 11:30:02.268877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.753 [2024-10-06 11:30:02.268893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.753 [2024-10-06 11:30:02.268900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.753 [2024-10-06 11:30:02.269072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.753 [2024-10-06 11:30:02.269239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.753 [2024-10-06 11:30:02.269247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.753 [2024-10-06 11:30:02.269253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.753 [2024-10-06 11:30:02.271848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.753 [2024-10-06 11:30:02.281283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.753 [2024-10-06 11:30:02.281734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.753 [2024-10-06 11:30:02.281750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.753 [2024-10-06 11:30:02.281757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.753 [2024-10-06 11:30:02.281923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.753 [2024-10-06 11:30:02.282095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.753 [2024-10-06 11:30:02.282103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.753 [2024-10-06 11:30:02.282113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.753 [2024-10-06 11:30:02.284705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.753 [2024-10-06 11:30:02.294005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.753 [2024-10-06 11:30:02.294451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.753 [2024-10-06 11:30:02.294467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.753 [2024-10-06 11:30:02.294474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.753 [2024-10-06 11:30:02.294632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.753 [2024-10-06 11:30:02.294789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.753 [2024-10-06 11:30:02.294796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.754 [2024-10-06 11:30:02.294802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.754 [2024-10-06 11:30:02.297400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.754 [2024-10-06 11:30:02.306833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.754 [2024-10-06 11:30:02.307260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.754 [2024-10-06 11:30:02.307276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.754 [2024-10-06 11:30:02.307282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.754 [2024-10-06 11:30:02.307440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.754 [2024-10-06 11:30:02.307597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.754 [2024-10-06 11:30:02.307604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.754 [2024-10-06 11:30:02.307610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.754 [2024-10-06 11:30:02.310224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:04.754 [2024-10-06 11:30:02.319638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:04.754 [2024-10-06 11:30:02.320098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.754 [2024-10-06 11:30:02.320113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:04.754 [2024-10-06 11:30:02.320120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:04.754 [2024-10-06 11:30:02.320278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:04.754 [2024-10-06 11:30:02.320436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.754 [2024-10-06 11:30:02.320443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.754 [2024-10-06 11:30:02.320449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.754 [2024-10-06 11:30:02.323114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.013 [2024-10-06 11:30:02.332515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.013 [2024-10-06 11:30:02.332993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.013 [2024-10-06 11:30:02.333036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.013 [2024-10-06 11:30:02.333073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.013 [2024-10-06 11:30:02.333654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.013 [2024-10-06 11:30:02.334244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.013 [2024-10-06 11:30:02.334278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.013 [2024-10-06 11:30:02.334284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.013 [2024-10-06 11:30:02.338041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.013 5700.60 IOPS, 22.27 MiB/s [2024-10-06 11:30:02.345269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.013 [2024-10-06 11:30:02.345744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.013 [2024-10-06 11:30:02.345789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.013 [2024-10-06 11:30:02.345812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.013 [2024-10-06 11:30:02.346327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.013 [2024-10-06 11:30:02.346496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.013 [2024-10-06 11:30:02.346504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.013 [2024-10-06 11:30:02.346511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.013 [2024-10-06 11:30:02.349111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.013 [2024-10-06 11:30:02.358035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.013 [2024-10-06 11:30:02.358420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.013 [2024-10-06 11:30:02.358466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.013 [2024-10-06 11:30:02.358489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.013 [2024-10-06 11:30:02.359086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.013 [2024-10-06 11:30:02.359293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.013 [2024-10-06 11:30:02.359301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.013 [2024-10-06 11:30:02.359307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.013 [2024-10-06 11:30:02.361899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.013 [2024-10-06 11:30:02.370743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.013 [2024-10-06 11:30:02.371087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.013 [2024-10-06 11:30:02.371103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.013 [2024-10-06 11:30:02.371110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.013 [2024-10-06 11:30:02.371271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.013 [2024-10-06 11:30:02.371428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.013 [2024-10-06 11:30:02.371436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.013 [2024-10-06 11:30:02.371442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.013 [2024-10-06 11:30:02.373956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.013 [2024-10-06 11:30:02.383537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.013 [2024-10-06 11:30:02.384015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.013 [2024-10-06 11:30:02.384031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.014 [2024-10-06 11:30:02.384038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.014 [2024-10-06 11:30:02.384211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.014 [2024-10-06 11:30:02.384378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.014 [2024-10-06 11:30:02.384386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.014 [2024-10-06 11:30:02.384392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.014 [2024-10-06 11:30:02.387139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.014 [2024-10-06 11:30:02.396524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.014 [2024-10-06 11:30:02.396890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.014 [2024-10-06 11:30:02.396906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.014 [2024-10-06 11:30:02.396915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.014 [2024-10-06 11:30:02.397093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.014 [2024-10-06 11:30:02.397265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.014 [2024-10-06 11:30:02.397274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.014 [2024-10-06 11:30:02.397281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.014 [2024-10-06 11:30:02.400009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.014 [2024-10-06 11:30:02.409508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.014 [2024-10-06 11:30:02.409762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.014 [2024-10-06 11:30:02.409778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.014 [2024-10-06 11:30:02.409786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.014 [2024-10-06 11:30:02.409956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.014 [2024-10-06 11:30:02.410128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.014 [2024-10-06 11:30:02.410138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.014 [2024-10-06 11:30:02.410150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.014 [2024-10-06 11:30:02.412807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.014 [2024-10-06 11:30:02.422537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.014 [2024-10-06 11:30:02.422918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.014 [2024-10-06 11:30:02.422934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.014 [2024-10-06 11:30:02.422941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.014 [2024-10-06 11:30:02.423113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.014 [2024-10-06 11:30:02.423280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.014 [2024-10-06 11:30:02.423288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.014 [2024-10-06 11:30:02.423294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.014 [2024-10-06 11:30:02.425927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.014 [2024-10-06 11:30:02.435373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.014 [2024-10-06 11:30:02.435830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.014 [2024-10-06 11:30:02.435878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.014 [2024-10-06 11:30:02.435902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.014 [2024-10-06 11:30:02.436457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.014 [2024-10-06 11:30:02.436624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.014 [2024-10-06 11:30:02.436632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.014 [2024-10-06 11:30:02.436638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.014 [2024-10-06 11:30:02.439237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.014 [2024-10-06 11:30:02.448089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.014 [2024-10-06 11:30:02.448465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.014 [2024-10-06 11:30:02.448481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.014 [2024-10-06 11:30:02.448488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.014 [2024-10-06 11:30:02.448655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.014 [2024-10-06 11:30:02.448821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.014 [2024-10-06 11:30:02.448829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.014 [2024-10-06 11:30:02.448835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.014 [2024-10-06 11:30:02.451439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.014 [2024-10-06 11:30:02.460876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.014 [2024-10-06 11:30:02.461325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.014 [2024-10-06 11:30:02.461344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.014 [2024-10-06 11:30:02.461352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.014 [2024-10-06 11:30:02.461518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.014 [2024-10-06 11:30:02.461685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.014 [2024-10-06 11:30:02.461693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.014 [2024-10-06 11:30:02.461699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.014 [2024-10-06 11:30:02.464298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.014 [2024-10-06 11:30:02.473676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.014 [2024-10-06 11:30:02.474159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.014 [2024-10-06 11:30:02.474204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.014 [2024-10-06 11:30:02.474227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.014 [2024-10-06 11:30:02.474767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.014 [2024-10-06 11:30:02.474934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.014 [2024-10-06 11:30:02.474942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.014 [2024-10-06 11:30:02.474948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.014 [2024-10-06 11:30:02.477546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.014 [2024-10-06 11:30:02.486486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.014 [2024-10-06 11:30:02.486957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.014 [2024-10-06 11:30:02.487000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.014 [2024-10-06 11:30:02.487023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.014 [2024-10-06 11:30:02.487563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.014 [2024-10-06 11:30:02.487730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.014 [2024-10-06 11:30:02.487738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.014 [2024-10-06 11:30:02.487744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.014 [2024-10-06 11:30:02.490342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.014 [2024-10-06 11:30:02.499198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.014 [2024-10-06 11:30:02.499685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.014 [2024-10-06 11:30:02.499728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.014 [2024-10-06 11:30:02.499751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.014 [2024-10-06 11:30:02.500344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.014 [2024-10-06 11:30:02.500750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.014 [2024-10-06 11:30:02.500758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.014 [2024-10-06 11:30:02.500764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.014 [2024-10-06 11:30:02.503363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.015 [2024-10-06 11:30:02.511953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.015 [2024-10-06 11:30:02.512445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.015 [2024-10-06 11:30:02.512491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.015 [2024-10-06 11:30:02.512514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.015 [2024-10-06 11:30:02.513108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.015 [2024-10-06 11:30:02.513432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.015 [2024-10-06 11:30:02.513444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.015 [2024-10-06 11:30:02.513454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.015 [2024-10-06 11:30:02.517874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.015 [2024-10-06 11:30:02.525801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.015 [2024-10-06 11:30:02.526275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.015 [2024-10-06 11:30:02.526293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.015 [2024-10-06 11:30:02.526301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.015 [2024-10-06 11:30:02.526483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.015 [2024-10-06 11:30:02.526666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.015 [2024-10-06 11:30:02.526674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.015 [2024-10-06 11:30:02.526681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.015 [2024-10-06 11:30:02.529591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.015 [2024-10-06 11:30:02.538772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.015 [2024-10-06 11:30:02.539230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.015 [2024-10-06 11:30:02.539247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.015 [2024-10-06 11:30:02.539254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.015 [2024-10-06 11:30:02.539421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.015 [2024-10-06 11:30:02.539587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.015 [2024-10-06 11:30:02.539594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.015 [2024-10-06 11:30:02.539601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.015 [2024-10-06 11:30:02.542317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.015 [2024-10-06 11:30:02.551627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.015 [2024-10-06 11:30:02.552098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.015 [2024-10-06 11:30:02.552115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.015 [2024-10-06 11:30:02.552122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.015 [2024-10-06 11:30:02.552296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.015 [2024-10-06 11:30:02.552455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.015 [2024-10-06 11:30:02.552462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.015 [2024-10-06 11:30:02.552468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.015 [2024-10-06 11:30:02.555047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.015 [2024-10-06 11:30:02.564342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.015 [2024-10-06 11:30:02.564791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.015 [2024-10-06 11:30:02.564806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.015 [2024-10-06 11:30:02.564812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.015 [2024-10-06 11:30:02.564970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.015 [2024-10-06 11:30:02.565152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.015 [2024-10-06 11:30:02.565161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.015 [2024-10-06 11:30:02.565167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.015 [2024-10-06 11:30:02.567760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.015 [2024-10-06 11:30:02.577050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.015 [2024-10-06 11:30:02.577455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.015 [2024-10-06 11:30:02.577470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.015 [2024-10-06 11:30:02.577477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.015 [2024-10-06 11:30:02.577642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.015 [2024-10-06 11:30:02.577808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.015 [2024-10-06 11:30:02.577816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.015 [2024-10-06 11:30:02.577822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.015 [2024-10-06 11:30:02.580403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.274 [2024-10-06 11:30:02.589932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.274 [2024-10-06 11:30:02.590412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.274 [2024-10-06 11:30:02.590428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.274 [2024-10-06 11:30:02.590439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.274 [2024-10-06 11:30:02.590605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.274 [2024-10-06 11:30:02.590775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.274 [2024-10-06 11:30:02.590783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.274 [2024-10-06 11:30:02.590789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.274 [2024-10-06 11:30:02.593452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.274 [2024-10-06 11:30:02.602770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.274 [2024-10-06 11:30:02.603237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.274 [2024-10-06 11:30:02.603254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.274 [2024-10-06 11:30:02.603261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.274 [2024-10-06 11:30:02.603428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.274 [2024-10-06 11:30:02.603595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.274 [2024-10-06 11:30:02.603602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.274 [2024-10-06 11:30:02.603608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.274 [2024-10-06 11:30:02.606208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.274 [2024-10-06 11:30:02.615672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.274 [2024-10-06 11:30:02.616171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.274 [2024-10-06 11:30:02.616216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.274 [2024-10-06 11:30:02.616240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.274 [2024-10-06 11:30:02.616762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.274 [2024-10-06 11:30:02.616933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.274 [2024-10-06 11:30:02.616941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.274 [2024-10-06 11:30:02.616948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.274 [2024-10-06 11:30:02.619657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.274 [2024-10-06 11:30:02.628671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.274 [2024-10-06 11:30:02.629167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.274 [2024-10-06 11:30:02.629212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.274 [2024-10-06 11:30:02.629235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.275 [2024-10-06 11:30:02.629796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.275 [2024-10-06 11:30:02.629963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.275 [2024-10-06 11:30:02.629974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.275 [2024-10-06 11:30:02.629980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.275 [2024-10-06 11:30:02.632670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.275 [2024-10-06 11:30:02.641441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.275 [2024-10-06 11:30:02.641910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.275 [2024-10-06 11:30:02.641926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.275 [2024-10-06 11:30:02.641934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.275 [2024-10-06 11:30:02.642113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.275 [2024-10-06 11:30:02.642285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.275 [2024-10-06 11:30:02.642294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.275 [2024-10-06 11:30:02.642299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.275 [2024-10-06 11:30:02.645054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.275 [2024-10-06 11:30:02.654520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.275 [2024-10-06 11:30:02.654983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.275 [2024-10-06 11:30:02.654999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.275 [2024-10-06 11:30:02.655006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.275 [2024-10-06 11:30:02.655183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.275 [2024-10-06 11:30:02.655355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.275 [2024-10-06 11:30:02.655363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.275 [2024-10-06 11:30:02.655370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.275 [2024-10-06 11:30:02.658098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.275 [2024-10-06 11:30:02.667247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.275 [2024-10-06 11:30:02.667718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.275 [2024-10-06 11:30:02.667733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.275 [2024-10-06 11:30:02.667740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.275 [2024-10-06 11:30:02.667907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.275 [2024-10-06 11:30:02.668081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.275 [2024-10-06 11:30:02.668089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.275 [2024-10-06 11:30:02.668095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.275 [2024-10-06 11:30:02.670686] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.275 [2024-10-06 11:30:02.679980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.275 [2024-10-06 11:30:02.680456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.275 [2024-10-06 11:30:02.680472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.275 [2024-10-06 11:30:02.680479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.275 [2024-10-06 11:30:02.680645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.275 [2024-10-06 11:30:02.680811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.275 [2024-10-06 11:30:02.680819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.275 [2024-10-06 11:30:02.680824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.275 [2024-10-06 11:30:02.683426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.275 [2024-10-06 11:30:02.692721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.275 [2024-10-06 11:30:02.693146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.275 [2024-10-06 11:30:02.693163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.275 [2024-10-06 11:30:02.693170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.275 [2024-10-06 11:30:02.693753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.275 [2024-10-06 11:30:02.694312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.275 [2024-10-06 11:30:02.694320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.275 [2024-10-06 11:30:02.694326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.275 [2024-10-06 11:30:02.696928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.275 [2024-10-06 11:30:02.705486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.275 [2024-10-06 11:30:02.705903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.275 [2024-10-06 11:30:02.705946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.275 [2024-10-06 11:30:02.705969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.275 [2024-10-06 11:30:02.706563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.275 [2024-10-06 11:30:02.707155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.275 [2024-10-06 11:30:02.707181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.275 [2024-10-06 11:30:02.707203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.275 [2024-10-06 11:30:02.709794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.275 [2024-10-06 11:30:02.718186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.275 [2024-10-06 11:30:02.718656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.275 [2024-10-06 11:30:02.718672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.275 [2024-10-06 11:30:02.718682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.275 [2024-10-06 11:30:02.718849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.275 [2024-10-06 11:30:02.719015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.275 [2024-10-06 11:30:02.719023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.275 [2024-10-06 11:30:02.719029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.275 [2024-10-06 11:30:02.721624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.275 [2024-10-06 11:30:02.730995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.275 [2024-10-06 11:30:02.731466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.275 [2024-10-06 11:30:02.731482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.275 [2024-10-06 11:30:02.731489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.275 [2024-10-06 11:30:02.731656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.275 [2024-10-06 11:30:02.731822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.275 [2024-10-06 11:30:02.731830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.275 [2024-10-06 11:30:02.731836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.275 [2024-10-06 11:30:02.734433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.275 [2024-10-06 11:30:02.743713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.275 [2024-10-06 11:30:02.744176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.275 [2024-10-06 11:30:02.744220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.275 [2024-10-06 11:30:02.744243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.275 [2024-10-06 11:30:02.744819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.275 [2024-10-06 11:30:02.745368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.275 [2024-10-06 11:30:02.745381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.275 [2024-10-06 11:30:02.745391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.275 [2024-10-06 11:30:02.749814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.275 [2024-10-06 11:30:02.757362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.275 [2024-10-06 11:30:02.757842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.275 [2024-10-06 11:30:02.757859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.276 [2024-10-06 11:30:02.757867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.276 [2024-10-06 11:30:02.758049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.276 [2024-10-06 11:30:02.758238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.276 [2024-10-06 11:30:02.758251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.276 [2024-10-06 11:30:02.758258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.276 [2024-10-06 11:30:02.761166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.276 [2024-10-06 11:30:02.770443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.276 [2024-10-06 11:30:02.770884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.276 [2024-10-06 11:30:02.770900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.276 [2024-10-06 11:30:02.770907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.276 [2024-10-06 11:30:02.771082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.276 [2024-10-06 11:30:02.771254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.276 [2024-10-06 11:30:02.771263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.276 [2024-10-06 11:30:02.771269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.276 [2024-10-06 11:30:02.773997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.276 [2024-10-06 11:30:02.783514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.276 [2024-10-06 11:30:02.783878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.276 [2024-10-06 11:30:02.783894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.276 [2024-10-06 11:30:02.783901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.276 [2024-10-06 11:30:02.784078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.276 [2024-10-06 11:30:02.784250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.276 [2024-10-06 11:30:02.784258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.276 [2024-10-06 11:30:02.784264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.276 [2024-10-06 11:30:02.786993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.276 [2024-10-06 11:30:02.796523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.276 [2024-10-06 11:30:02.796913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.276 [2024-10-06 11:30:02.796929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.276 [2024-10-06 11:30:02.796937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.276 [2024-10-06 11:30:02.797116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.276 [2024-10-06 11:30:02.797288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.276 [2024-10-06 11:30:02.797296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.276 [2024-10-06 11:30:02.797302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.276 [2024-10-06 11:30:02.800033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.276 [2024-10-06 11:30:02.809604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.276 [2024-10-06 11:30:02.810095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.276 [2024-10-06 11:30:02.810112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.276 [2024-10-06 11:30:02.810120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.276 [2024-10-06 11:30:02.810302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.276 [2024-10-06 11:30:02.810485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.276 [2024-10-06 11:30:02.810500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.276 [2024-10-06 11:30:02.810507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.276 [2024-10-06 11:30:02.813419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.276 [2024-10-06 11:30:02.822620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.276 [2024-10-06 11:30:02.823087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.276 [2024-10-06 11:30:02.823105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.276 [2024-10-06 11:30:02.823112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.276 [2024-10-06 11:30:02.823303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.276 [2024-10-06 11:30:02.823474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.276 [2024-10-06 11:30:02.823482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.276 [2024-10-06 11:30:02.823488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.276 [2024-10-06 11:30:02.826277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.276 [2024-10-06 11:30:02.835714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.276 [2024-10-06 11:30:02.836225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.276 [2024-10-06 11:30:02.836270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.276 [2024-10-06 11:30:02.836293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.276 [2024-10-06 11:30:02.836499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.276 [2024-10-06 11:30:02.836671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.276 [2024-10-06 11:30:02.836679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.276 [2024-10-06 11:30:02.836686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.276 [2024-10-06 11:30:02.839427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2265381 Killed "${NVMF_APP[@]}" "$@" 00:35:05.276 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:05.276 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:05.276 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:05.276 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:05.276 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:05.536 [2024-10-06 11:30:02.848794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.536 [2024-10-06 11:30:02.849208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.536 [2024-10-06 11:30:02.849225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.536 [2024-10-06 11:30:02.849233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.536 [2024-10-06 11:30:02.849406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.536 [2024-10-06 11:30:02.849580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.536 [2024-10-06 11:30:02.849588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.536 [2024-10-06 11:30:02.849594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.536 [2024-10-06 11:30:02.852345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.536 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=2266696 00:35:05.536 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 2266696 00:35:05.536 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:05.536 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2266696 ']' 00:35:05.536 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:05.536 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:05.536 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:05.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:05.536 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:05.536 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:05.536 [2024-10-06 11:30:02.861881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.536 [2024-10-06 11:30:02.862335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.536 [2024-10-06 11:30:02.862351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.536 [2024-10-06 11:30:02.862359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.536 [2024-10-06 11:30:02.862530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.536 [2024-10-06 11:30:02.862701] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.536 [2024-10-06 11:30:02.862711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.536 [2024-10-06 11:30:02.862717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.536 [2024-10-06 11:30:02.865455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.536 [2024-10-06 11:30:02.874867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.536 [2024-10-06 11:30:02.875281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.536 [2024-10-06 11:30:02.875298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.536 [2024-10-06 11:30:02.875305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.536 [2024-10-06 11:30:02.875481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.536 [2024-10-06 11:30:02.875652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.536 [2024-10-06 11:30:02.875661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.536 [2024-10-06 11:30:02.875667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.536 [2024-10-06 11:30:02.878383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.536 [2024-10-06 11:30:02.887876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.536 [2024-10-06 11:30:02.888315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.536 [2024-10-06 11:30:02.888332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.536 [2024-10-06 11:30:02.888340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.536 [2024-10-06 11:30:02.888508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.536 [2024-10-06 11:30:02.888675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.536 [2024-10-06 11:30:02.888682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.536 [2024-10-06 11:30:02.888689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.536 [2024-10-06 11:30:02.891407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.536 [2024-10-06 11:30:02.900753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.536 [2024-10-06 11:30:02.901162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.536 [2024-10-06 11:30:02.901179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.536 [2024-10-06 11:30:02.901187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.536 [2024-10-06 11:30:02.901358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.536 [2024-10-06 11:30:02.901529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.536 [2024-10-06 11:30:02.901538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.536 [2024-10-06 11:30:02.901544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.536 [2024-10-06 11:30:02.902499] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:35:05.536 [2024-10-06 11:30:02.902540] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:05.536 [2024-10-06 11:30:02.904283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.537 [2024-10-06 11:30:02.913847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.537 [2024-10-06 11:30:02.914233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.537 [2024-10-06 11:30:02.914267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.537 [2024-10-06 11:30:02.914276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.537 [2024-10-06 11:30:02.914498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.537 [2024-10-06 11:30:02.914700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.537 [2024-10-06 11:30:02.914708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.537 [2024-10-06 11:30:02.914715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.537 [2024-10-06 11:30:02.917613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.537 [2024-10-06 11:30:02.926946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.537 [2024-10-06 11:30:02.927325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.537 [2024-10-06 11:30:02.927344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.537 [2024-10-06 11:30:02.927351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.537 [2024-10-06 11:30:02.927522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.537 [2024-10-06 11:30:02.927694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.537 [2024-10-06 11:30:02.927702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.537 [2024-10-06 11:30:02.927709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.537 [2024-10-06 11:30:02.930447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.537 [2024-10-06 11:30:02.939979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.537 [2024-10-06 11:30:02.940353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.537 [2024-10-06 11:30:02.940370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.537 [2024-10-06 11:30:02.940378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.537 [2024-10-06 11:30:02.940549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.537 [2024-10-06 11:30:02.940724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.537 [2024-10-06 11:30:02.940733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.537 [2024-10-06 11:30:02.940740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.537 [2024-10-06 11:30:02.943472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.537 [2024-10-06 11:30:02.952997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.537 [2024-10-06 11:30:02.953422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.537 [2024-10-06 11:30:02.953439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.537 [2024-10-06 11:30:02.953447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.537 [2024-10-06 11:30:02.953619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.537 [2024-10-06 11:30:02.953791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.537 [2024-10-06 11:30:02.953799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.537 [2024-10-06 11:30:02.953810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.537 [2024-10-06 11:30:02.956548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.537 [2024-10-06 11:30:02.964375] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:05.537 [2024-10-06 11:30:02.965934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.537 [2024-10-06 11:30:02.966397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.537 [2024-10-06 11:30:02.966415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.537 [2024-10-06 11:30:02.966423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.537 [2024-10-06 11:30:02.966594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.537 [2024-10-06 11:30:02.966767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.537 [2024-10-06 11:30:02.966776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.537 [2024-10-06 11:30:02.966782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.537 [2024-10-06 11:30:02.969491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.537 [2024-10-06 11:30:02.978874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.537 [2024-10-06 11:30:02.979343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.537 [2024-10-06 11:30:02.979360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.537 [2024-10-06 11:30:02.979368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.537 [2024-10-06 11:30:02.979540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.537 [2024-10-06 11:30:02.979711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.537 [2024-10-06 11:30:02.979720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.537 [2024-10-06 11:30:02.979727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.537 [2024-10-06 11:30:02.982435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.537 [2024-10-06 11:30:02.991784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.537 [2024-10-06 11:30:02.992358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.537 [2024-10-06 11:30:02.992382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.537 [2024-10-06 11:30:02.992391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.537 [2024-10-06 11:30:02.992566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.537 [2024-10-06 11:30:02.992739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.537 [2024-10-06 11:30:02.992748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.537 [2024-10-06 11:30:02.992756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.537 [2024-10-06 11:30:02.995477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.537 [2024-10-06 11:30:03.003635] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:05.537 [2024-10-06 11:30:03.003670] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:05.537 [2024-10-06 11:30:03.003677] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:05.537 [2024-10-06 11:30:03.003683] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:05.537 [2024-10-06 11:30:03.003688] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
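The app_setup_trace notices above describe how to pull the tracepoint data the target was started with (-e 0xFFFF): either attach spdk_trace to the running instance, or copy the shared-memory trace file for offline decoding. A minimal sketch of both options, using only the names printed in the log (the /tmp destination is arbitrary):

    # Live snapshot from the running nvmf app with shm id 0, as the notice suggests
    spdk_trace -s nvmf -i 0
    # Or keep the raw trace file and decode it later
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0    # later: spdk_trace -f /tmp/nvmf_trace.0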
00:35:05.537 [2024-10-06 11:30:03.004536] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:35:05.537 [2024-10-06 11:30:03.004569] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:35:05.537 [2024-10-06 11:30:03.004571] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.537 [2024-10-06 11:30:03.004849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.537 [2024-10-06 11:30:03.005253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.537 [2024-10-06 11:30:03.005275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.537 [2024-10-06 11:30:03.005287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.537 [2024-10-06 11:30:03.005463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.537 [2024-10-06 11:30:03.005637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.537 [2024-10-06 11:30:03.005646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.537 [2024-10-06 11:30:03.005654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.537 [2024-10-06 11:30:03.008395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.537 [2024-10-06 11:30:03.017928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.537 [2024-10-06 11:30:03.018343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.537 [2024-10-06 11:30:03.018365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.537 [2024-10-06 11:30:03.018375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.537 [2024-10-06 11:30:03.018549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.537 [2024-10-06 11:30:03.018727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.537 [2024-10-06 11:30:03.018737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.537 [2024-10-06 11:30:03.018745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.537 [2024-10-06 11:30:03.021485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
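The three reactor lines above match the core mask the target was launched with: -m 0xE sets bits 1, 2 and 3, so reactors come up on cores 1-3 and core 0 is left free. The check below is just that arithmetic, nothing SPDK-specific:

    # 0xE = binary 1110 -> cores 1, 2, 3
    printf '0x%X\n' $(( (1<<1) | (1<<2) | (1<<3) ))    # prints 0xE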
00:35:05.538 [2024-10-06 11:30:03.031007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.538 [2024-10-06 11:30:03.031438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.538 [2024-10-06 11:30:03.031460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.538 [2024-10-06 11:30:03.031469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.538 [2024-10-06 11:30:03.031642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.538 [2024-10-06 11:30:03.031818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.538 [2024-10-06 11:30:03.031828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.538 [2024-10-06 11:30:03.031843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.538 [2024-10-06 11:30:03.034581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.538 [2024-10-06 11:30:03.044109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.538 [2024-10-06 11:30:03.044467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.538 [2024-10-06 11:30:03.044489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.538 [2024-10-06 11:30:03.044498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.538 [2024-10-06 11:30:03.044671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.538 [2024-10-06 11:30:03.044844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.538 [2024-10-06 11:30:03.044852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.538 [2024-10-06 11:30:03.044859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.538 [2024-10-06 11:30:03.047596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.538 [2024-10-06 11:30:03.057140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.538 [2024-10-06 11:30:03.057545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.538 [2024-10-06 11:30:03.057567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.538 [2024-10-06 11:30:03.057576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.538 [2024-10-06 11:30:03.057750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.538 [2024-10-06 11:30:03.057927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.538 [2024-10-06 11:30:03.057935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.538 [2024-10-06 11:30:03.057943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.538 [2024-10-06 11:30:03.060679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.538 [2024-10-06 11:30:03.070205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.538 [2024-10-06 11:30:03.070581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.538 [2024-10-06 11:30:03.070599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.538 [2024-10-06 11:30:03.070608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.538 [2024-10-06 11:30:03.070780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.538 [2024-10-06 11:30:03.070954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.538 [2024-10-06 11:30:03.070962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.538 [2024-10-06 11:30:03.070970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.538 [2024-10-06 11:30:03.073708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.538 [2024-10-06 11:30:03.083228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.538 [2024-10-06 11:30:03.083596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.538 [2024-10-06 11:30:03.083612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.538 [2024-10-06 11:30:03.083620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.538 [2024-10-06 11:30:03.083791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.538 [2024-10-06 11:30:03.083962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.538 [2024-10-06 11:30:03.083971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.538 [2024-10-06 11:30:03.083977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.538 [2024-10-06 11:30:03.086722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.538 [2024-10-06 11:30:03.096252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.538 [2024-10-06 11:30:03.096559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.538 [2024-10-06 11:30:03.096575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.538 [2024-10-06 11:30:03.096582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.538 [2024-10-06 11:30:03.096754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.538 [2024-10-06 11:30:03.096925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.538 [2024-10-06 11:30:03.096934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.538 [2024-10-06 11:30:03.096941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.538 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:05.538 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:05.538 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:05.538 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:05.538 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:05.538 [2024-10-06 11:30:03.099676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.538 [2024-10-06 11:30:03.109200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.798 [2024-10-06 11:30:03.109575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.798 [2024-10-06 11:30:03.109593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.798 [2024-10-06 11:30:03.109601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.798 [2024-10-06 11:30:03.109772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.798 [2024-10-06 11:30:03.109946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.798 [2024-10-06 11:30:03.109956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.798 [2024-10-06 11:30:03.109963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.798 [2024-10-06 11:30:03.112700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.798 [2024-10-06 11:30:03.122223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.798 [2024-10-06 11:30:03.122607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.798 [2024-10-06 11:30:03.122623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.798 [2024-10-06 11:30:03.122631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.798 [2024-10-06 11:30:03.122802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.798 [2024-10-06 11:30:03.122974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.798 [2024-10-06 11:30:03.122982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.798 [2024-10-06 11:30:03.122988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.798 [2024-10-06 11:30:03.125726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.798 [2024-10-06 11:30:03.135244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.798 [2024-10-06 11:30:03.135569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.798 [2024-10-06 11:30:03.135586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.798 [2024-10-06 11:30:03.135593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.798 [2024-10-06 11:30:03.135764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.798 [2024-10-06 11:30:03.135936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.798 [2024-10-06 11:30:03.135944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.798 [2024-10-06 11:30:03.135951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.798 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:05.798 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:05.798 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.798 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:05.798 [2024-10-06 11:30:03.138686] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.798 [2024-10-06 11:30:03.143051] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:05.798 [2024-10-06 11:30:03.148205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.798 [2024-10-06 11:30:03.148527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.799 [2024-10-06 11:30:03.148544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.799 [2024-10-06 11:30:03.148551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.799 [2024-10-06 11:30:03.148722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.799 [2024-10-06 11:30:03.148894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.799 [2024-10-06 11:30:03.148902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.799 [2024-10-06 11:30:03.148908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.799 [2024-10-06 11:30:03.151646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.799 [2024-10-06 11:30:03.161172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.799 [2024-10-06 11:30:03.161615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.799 [2024-10-06 11:30:03.161632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.799 [2024-10-06 11:30:03.161640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.799 [2024-10-06 11:30:03.161811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.799 [2024-10-06 11:30:03.161983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.799 [2024-10-06 11:30:03.161991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.799 [2024-10-06 11:30:03.161997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.799 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.799 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:05.799 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.799 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:05.799 [2024-10-06 11:30:03.164785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.799 [2024-10-06 11:30:03.174220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.799 [2024-10-06 11:30:03.174624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.799 [2024-10-06 11:30:03.174641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.799 [2024-10-06 11:30:03.174648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.799 [2024-10-06 11:30:03.174820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.799 [2024-10-06 11:30:03.174991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.799 [2024-10-06 11:30:03.174999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.799 [2024-10-06 11:30:03.175005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.799 [2024-10-06 11:30:03.177742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.799 Malloc0 00:35:05.799 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.799 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:05.799 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.799 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:05.799 [2024-10-06 11:30:03.187282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.799 [2024-10-06 11:30:03.187674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.799 [2024-10-06 11:30:03.187691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.799 [2024-10-06 11:30:03.187699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.799 [2024-10-06 11:30:03.187871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.799 [2024-10-06 11:30:03.188042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.799 [2024-10-06 11:30:03.188064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.799 [2024-10-06 11:30:03.188071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.799 [2024-10-06 11:30:03.190798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.799 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.799 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:05.799 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.799 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:05.799 [2024-10-06 11:30:03.200329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.799 [2024-10-06 11:30:03.200782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.799 [2024-10-06 11:30:03.200798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2373f70 with addr=10.0.0.2, port=4420 00:35:05.799 [2024-10-06 11:30:03.200806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2373f70 is same with the state(6) to be set 00:35:05.799 [2024-10-06 11:30:03.200977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373f70 (9): Bad file descriptor 00:35:05.799 [2024-10-06 11:30:03.201153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.799 [2024-10-06 11:30:03.201162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.799 [2024-10-06 11:30:03.201169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:05.799 [2024-10-06 11:30:03.203901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.799 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.799 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:05.799 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.799 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:05.799 [2024-10-06 11:30:03.208373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.799 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.799 11:30:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2265639 00:35:05.799 [2024-10-06 11:30:03.213415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.799 [2024-10-06 11:30:03.241057] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:35:14.821 4933.50 IOPS, 19.27 MiB/s 5808.14 IOPS, 22.69 MiB/s 6467.12 IOPS, 25.26 MiB/s 6984.67 IOPS, 27.28 MiB/s 7401.50 IOPS, 28.91 MiB/s 7761.91 IOPS, 30.32 MiB/s 8050.00 IOPS, 31.45 MiB/s 8282.31 IOPS, 32.35 MiB/s 8493.29 IOPS, 33.18 MiB/s 00:35:14.821 Latency(us) 00:35:14.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.821 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:14.821 Verification LBA range: start 0x0 length 0x4000 00:35:14.821 Nvme1n1 : 15.00 8663.99 33.84 10794.89 0.00 6558.41 635.86 12483.05 00:35:14.821 =================================================================================================================== 00:35:14.821 Total : 8663.99 33.84 10794.89 0.00 6558.41 635.86 12483.05 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:15.080 rmmod nvme_tcp 00:35:15.080 rmmod nvme_fabrics 00:35:15.080 rmmod nvme_keyring 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
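Scattered through the xtrace output above is the full RPC sequence bdevperf.sh uses to stand up the TCP target it then measures: create the TCP transport, back it with a 64 MiB / 512-byte-block malloc bdev, create subsystem nqn.2016-06.io.spdk:cnode1, attach the namespace, and open a listener on 10.0.0.2:4420. Collected in one place, and assuming the stock scripts/rpc.py client rather than the harness's rpc_cmd wrapper, the equivalent calls are roughly:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

In this job the target runs inside the cvl_0_0_ns_spdk network namespace with its RPC socket at /var/tmp/spdk.sock, so the same calls would be issued through ip netns exec with the matching -s socket path.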
00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 2266696 ']' 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 2266696 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2266696 ']' 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2266696 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:15.080 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2266696 00:35:15.340 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:15.340 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:15.340 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2266696' 00:35:15.340 killing process with pid 2266696 00:35:15.340 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2266696 00:35:15.340 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2266696 00:35:15.340 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:15.340 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:15.340 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:15.340 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:15.340 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:35:15.341 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:15.341 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:35:15.341 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:15.341 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:15.341 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.341 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:15.341 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.877 11:30:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:17.878 00:35:17.878 real 0m25.634s 00:35:17.878 user 1m1.114s 00:35:17.878 sys 0m6.359s 00:35:17.878 11:30:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:17.878 11:30:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:17.878 ************************************ 00:35:17.878 END TEST nvmf_bdevperf 00:35:17.878 ************************************ 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.878 ************************************ 00:35:17.878 START TEST nvmf_target_disconnect 00:35:17.878 ************************************ 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:17.878 * Looking for test storage... 00:35:17.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:17.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.878 --rc genhtml_branch_coverage=1 00:35:17.878 --rc genhtml_function_coverage=1 00:35:17.878 --rc genhtml_legend=1 00:35:17.878 --rc geninfo_all_blocks=1 00:35:17.878 --rc geninfo_unexecuted_blocks=1 00:35:17.878 00:35:17.878 ' 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:17.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.878 --rc genhtml_branch_coverage=1 00:35:17.878 --rc genhtml_function_coverage=1 00:35:17.878 --rc genhtml_legend=1 00:35:17.878 --rc geninfo_all_blocks=1 00:35:17.878 --rc geninfo_unexecuted_blocks=1 00:35:17.878 00:35:17.878 ' 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:17.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.878 --rc genhtml_branch_coverage=1 00:35:17.878 --rc genhtml_function_coverage=1 00:35:17.878 --rc genhtml_legend=1 00:35:17.878 --rc geninfo_all_blocks=1 00:35:17.878 --rc geninfo_unexecuted_blocks=1 00:35:17.878 00:35:17.878 ' 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:17.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.878 --rc genhtml_branch_coverage=1 00:35:17.878 --rc genhtml_function_coverage=1 00:35:17.878 --rc genhtml_legend=1 00:35:17.878 --rc geninfo_all_blocks=1 00:35:17.878 --rc geninfo_unexecuted_blocks=1 00:35:17.878 00:35:17.878 ' 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.878 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:17.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:17.879 11:30:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:23.154 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:23.154 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:35:23.154 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:23.154 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:23.154 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:23.154 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:23.154 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:23.154 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:23.155 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:23.155 11:30:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:23.155 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:23.155 Found net devices under 0000:af:00.0: cvl_0_0 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:23.155 Found net devices under 0000:af:00.1: cvl_0_1 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
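The sysfs walk traced above is what maps each detected E810 PCI function to its kernel net device name. A minimal standalone sketch of that lookup (hypothetical; the real logic lives in test/nvmf/common.sh, and the PCI addresses are simply the ones reported in this run):

#!/usr/bin/env bash
# Sketch: enumerate the net devices bound to each E810 PCI function, using the
# same /sys/bus/pci/devices/<pci>/net/* glob seen in the trace above.
for pci in 0000:af:00.0 0000:af:00.1; do        # addresses taken from this run (assumption)
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] || continue              # skip functions with no bound net device
        echo "Found net devices under $pci: ${path##*/}"
    done
done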
00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:23.155 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:23.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:23.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:35:23.155 00:35:23.155 --- 10.0.0.2 ping statistics --- 00:35:23.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.155 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:23.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:23.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:35:23.156 00:35:23.156 --- 10.0.0.1 ping statistics --- 00:35:23.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.156 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:23.156 ************************************ 00:35:23.156 START TEST nvmf_target_disconnect_tc1 00:35:23.156 ************************************ 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:23.156 11:30:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:23.156 [2024-10-06 11:30:20.392154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.156 [2024-10-06 11:30:20.392259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b3640 with addr=10.0.0.2, port=4420 00:35:23.156 [2024-10-06 11:30:20.392311] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:23.156 [2024-10-06 11:30:20.392338] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:23.156 [2024-10-06 11:30:20.392357] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:23.156 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:23.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:23.156 Initializing NVMe Controllers 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:23.156 00:35:23.156 real 0m0.095s 00:35:23.156 user 0m0.041s 00:35:23.156 sys 0m0.053s 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:23.156 ************************************ 00:35:23.156 END TEST nvmf_target_disconnect_tc1 00:35:23.156 ************************************ 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 
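In outline, tc1 only verifies that the reconnect example fails cleanly while nothing is listening on 10.0.0.2:4420 yet (the NOT wrapper above expects a non-zero exit). A minimal sketch of that check, reusing the exact invocation from this run and not a substitute for host/target_disconnect.sh:

# Sketch: the connect() must be refused (errno 111) because no target has been started.
if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
       -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
       -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
    echo "unexpected success: no target should be reachable yet" >&2
    exit 1
fi
echo "probe failed as expected; tc1 passes"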
00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:23.156 ************************************ 00:35:23.156 START TEST nvmf_target_disconnect_tc2 00:35:23.156 ************************************ 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2272113 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2272113 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2272113 ']' 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:23.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:23.156 [2024-10-06 11:30:20.519458] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:35:23.156 [2024-10-06 11:30:20.519499] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:23.156 [2024-10-06 11:30:20.586926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:23.156 [2024-10-06 11:30:20.625219] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:23.156 [2024-10-06 11:30:20.625260] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:23.156 [2024-10-06 11:30:20.625267] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:23.156 [2024-10-06 11:30:20.625272] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:23.156 [2024-10-06 11:30:20.625277] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:23.156 [2024-10-06 11:30:20.626861] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:35:23.156 [2024-10-06 11:30:20.626967] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:35:23.156 [2024-10-06 11:30:20.627096] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:35:23.156 [2024-10-06 11:30:20.627097] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:23.156 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:23.416 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:23.416 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:23.416 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.416 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:23.416 Malloc0 00:35:23.416 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.416 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:23.416 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.416 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:23.416 [2024-10-06 11:30:20.782765] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:23.416 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.417 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:23.417 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.417 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:23.417 11:30:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.417 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:23.417 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.417 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:23.417 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.417 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:23.417 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.417 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:23.417 [2024-10-06 11:30:20.811007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:23.417 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.417 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:23.417 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.417 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:23.417 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.417 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2272135 00:35:23.417 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:23.417 11:30:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:25.325 11:30:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2272113 00:35:25.325 11:30:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error 
(sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 [2024-10-06 11:30:22.837519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 
00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Read completed with error (sct=0, sc=8) 00:35:25.325 starting I/O failed 00:35:25.325 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 [2024-10-06 11:30:22.837717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 
starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 [2024-10-06 11:30:22.837909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O 
failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Read completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 Write completed with error (sct=0, sc=8) 00:35:25.326 starting I/O failed 00:35:25.326 [2024-10-06 11:30:22.838102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:25.326 [2024-10-06 11:30:22.838381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.326 [2024-10-06 11:30:22.838399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.326 qpair failed and we were unable to recover it. 00:35:25.326 [2024-10-06 11:30:22.838653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.326 [2024-10-06 11:30:22.838668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.326 qpair failed and we were unable to recover it. 00:35:25.326 [2024-10-06 11:30:22.838857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.326 [2024-10-06 11:30:22.838868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.326 qpair failed and we were unable to recover it. 00:35:25.326 [2024-10-06 11:30:22.839085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.326 [2024-10-06 11:30:22.839098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.326 qpair failed and we were unable to recover it. 00:35:25.326 [2024-10-06 11:30:22.839241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.326 [2024-10-06 11:30:22.839252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.326 qpair failed and we were unable to recover it. 00:35:25.326 [2024-10-06 11:30:22.839442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.326 [2024-10-06 11:30:22.839474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.326 qpair failed and we were unable to recover it. 00:35:25.326 [2024-10-06 11:30:22.839645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.326 [2024-10-06 11:30:22.839677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.326 qpair failed and we were unable to recover it. 00:35:25.326 [2024-10-06 11:30:22.839845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.326 [2024-10-06 11:30:22.839877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.326 qpair failed and we were unable to recover it. 
00:35:25.326 [2024-10-06 11:30:22.840106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.326 [2024-10-06 11:30:22.840141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.326 qpair failed and we were unable to recover it. 00:35:25.326 [2024-10-06 11:30:22.840327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.326 [2024-10-06 11:30:22.840360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.326 qpair failed and we were unable to recover it. 00:35:25.326 [2024-10-06 11:30:22.840589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.326 [2024-10-06 11:30:22.840630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.326 qpair failed and we were unable to recover it. 00:35:25.326 [2024-10-06 11:30:22.840792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.326 [2024-10-06 11:30:22.840825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.840977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.841008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.841201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.841213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.841411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.841444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.841662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.841695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.841922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.841954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.842120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.842132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 
00:35:25.327 [2024-10-06 11:30:22.842254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.842266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.842393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.842405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.842525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.842556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.842728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.842761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.842927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.842960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.843125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.843163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.843354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.843366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.843557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.843572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.843691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.843704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.844707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.844739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 
00:35:25.327 [2024-10-06 11:30:22.845042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.845069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.845206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.845224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.845413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.845430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.845564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.845582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.845766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.845783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.845905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.845922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.846118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.846137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.846255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.846272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.846407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.846423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.846537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.846554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 
00:35:25.327 [2024-10-06 11:30:22.846757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.846790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.846949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.846966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.847112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.847130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.847276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.847294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.847508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.847528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.847710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.847727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.847850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.847867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.327 qpair failed and we were unable to recover it. 00:35:25.327 [2024-10-06 11:30:22.848025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.327 [2024-10-06 11:30:22.848054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.848204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.848217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.848323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.848353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 
00:35:25.328 [2024-10-06 11:30:22.848565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.848599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.848850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.848881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.849163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.849176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.849274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.849286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.849397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.849408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.849590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.849602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.849755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.849767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.849877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.849889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.849999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.850011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.850120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.850131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 
00:35:25.328 [2024-10-06 11:30:22.850252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.850263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.850368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.850380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.850555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.850567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.850705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.850718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.850901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.850935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.851108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.851142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.852388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.852409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.852600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.852612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.852822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.852854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.853016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.853048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 
00:35:25.328 [2024-10-06 11:30:22.853303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.853336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.853490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.853512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.853695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.853728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.853978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.854011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.854255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.854289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.854443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.854476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.854643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.854676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.855592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.855612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.855838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.855850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.856035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.856047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 
00:35:25.328 [2024-10-06 11:30:22.856236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.856249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.856423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.856434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.856692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.856725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.856883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.328 [2024-10-06 11:30:22.856917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.328 qpair failed and we were unable to recover it. 00:35:25.328 [2024-10-06 11:30:22.857083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.857126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.857347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.857380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.857547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.857579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.857793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.857825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.857975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.857988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.858234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.858269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 
00:35:25.329 [2024-10-06 11:30:22.858489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.858522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.858665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.858699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.858912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.858945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.859173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.859207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.859501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.859513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.859691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.859724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.860028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.860069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.860349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.860361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.860539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.860551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.860682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.860695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 
00:35:25.329 [2024-10-06 11:30:22.860885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.860918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.861224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.861258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.861431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.861465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.861647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.861680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.861841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.861874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.862022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.862056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.862235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.862269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.862500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.862533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.862700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.862733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.863038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.863098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 
00:35:25.329 [2024-10-06 11:30:22.863328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.863361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.863522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.863556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.864843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.864866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.865088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.865124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.865355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.865389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.866611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.866633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.866809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.866822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.867011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.867044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.867425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.867460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 00:35:25.329 [2024-10-06 11:30:22.867625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.867657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.329 qpair failed and we were unable to recover it. 
00:35:25.329 [2024-10-06 11:30:22.867836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.329 [2024-10-06 11:30:22.867869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.868045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.868088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.868309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.868341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.868478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.868491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.868659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.868701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.868869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.868902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.869143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.869177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.869341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.869373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.869599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.869632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.869850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.869883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 
00:35:25.330 [2024-10-06 11:30:22.870056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.870099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.870287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.870320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.870544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.870577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.870750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.870782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.871036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.871078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.871293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.871326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.871479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.871490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.871729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.871762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.871942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.871975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.872135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.872173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 
00:35:25.330 [2024-10-06 11:30:22.872303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.872314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.872480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.872512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.872815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.872847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.873081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.873115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.873279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.873311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.873473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.330 [2024-10-06 11:30:22.873506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.330 qpair failed and we were unable to recover it. 00:35:25.330 [2024-10-06 11:30:22.873679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.873712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.873943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.873975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.874189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.874201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.874307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.874319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 
00:35:25.331 [2024-10-06 11:30:22.874436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.874449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.874589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.874622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.874854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.874887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.875112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.875146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.875295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.875307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.875549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.875561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.875722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.875734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.875946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.875980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.876142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.876155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.876341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.876375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 
00:35:25.331 [2024-10-06 11:30:22.876530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.876563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.876793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.876826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.877103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.877137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.877303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.877336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.877489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.877528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.877674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.877708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.877912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.877945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.878127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.878163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.878316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.878349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.879198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.879222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 
00:35:25.331 [2024-10-06 11:30:22.879483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.879496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.879664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.879676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.879853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.879887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.880102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.880114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.880217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.880228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.880320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.880330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.880444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.880455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.880561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.880573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.880696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.880708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.880885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.880896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 
00:35:25.331 [2024-10-06 11:30:22.881019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.881052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.331 qpair failed and we were unable to recover it. 00:35:25.331 [2024-10-06 11:30:22.881226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.331 [2024-10-06 11:30:22.881260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.881410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.881444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.881720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.881753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.881901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.881935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.882145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.882180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.882393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.882404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.882644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.882677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.882830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.882863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.883008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.883042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 
00:35:25.332 [2024-10-06 11:30:22.883237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.883250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.883429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.883441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.883536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.883547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.883690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.883737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.883973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.884006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.884160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.884194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.884343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.884355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.884469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.884500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.884736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.884769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.884982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.885026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 
00:35:25.332 [2024-10-06 11:30:22.885199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.885211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.886436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.886459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.886663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.886676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.886861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.886895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.887036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.887092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.887323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.887370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.887546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.887558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.887726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.887739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.887949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.887982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 00:35:25.332 [2024-10-06 11:30:22.888142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.332 [2024-10-06 11:30:22.888178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.332 qpair failed and we were unable to recover it. 
00:35:25.332 [2024-10-06 11:30:22.888321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:25.332 [2024-10-06 11:30:22.888354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:25.332 qpair failed and we were unable to recover it.
00:35:25.332 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." records repeat for tqpair=0x7f81cc000b90 from 11:30:22.888 through 11:30:22.914 ...]
00:35:25.620 [2024-10-06 11:30:22.915073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:25.620 [2024-10-06 11:30:22.915120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420
00:35:25.620 qpair failed and we were unable to recover it.
00:35:25.620 [... the same records repeat for tqpair=0x7f81d0000b90 from 11:30:22.915 through 11:30:22.923 ...]
00:35:25.621 [2024-10-06 11:30:22.923172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:25.621 [2024-10-06 11:30:22.923187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:25.621 qpair failed and we were unable to recover it.
00:35:25.622 [... the same records repeat for tqpair=0x7f81cc000b90 through 11:30:22.927 ...]
00:35:25.622 [2024-10-06 11:30:22.927435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.622 [2024-10-06 11:30:22.927447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.622 qpair failed and we were unable to recover it. 00:35:25.622 [2024-10-06 11:30:22.927634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.622 [2024-10-06 11:30:22.927646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.622 qpair failed and we were unable to recover it. 00:35:25.622 [2024-10-06 11:30:22.927883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.622 [2024-10-06 11:30:22.927895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.622 qpair failed and we were unable to recover it. 00:35:25.622 [2024-10-06 11:30:22.928130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.622 [2024-10-06 11:30:22.928144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.622 qpair failed and we were unable to recover it. 00:35:25.622 [2024-10-06 11:30:22.928321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.622 [2024-10-06 11:30:22.928334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.622 qpair failed and we were unable to recover it. 00:35:25.622 [2024-10-06 11:30:22.928438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.622 [2024-10-06 11:30:22.928450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.622 qpair failed and we were unable to recover it. 00:35:25.622 [2024-10-06 11:30:22.928651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.622 [2024-10-06 11:30:22.928664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.622 qpair failed and we were unable to recover it. 00:35:25.622 [2024-10-06 11:30:22.928769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.622 [2024-10-06 11:30:22.928782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.622 qpair failed and we were unable to recover it. 00:35:25.622 [2024-10-06 11:30:22.928902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.928915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.929155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.929168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 
00:35:25.623 [2024-10-06 11:30:22.929346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.929360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.929474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.929489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.929656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.929670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.929880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.929893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.930074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.930088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.930217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.930229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.930329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.930340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.931038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.931070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.931280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.931294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.931528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.931540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 
00:35:25.623 [2024-10-06 11:30:22.931794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.931806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.931925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.931938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.932110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.932124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.932263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.932276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.932397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.932412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.932534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.932548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.932653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.932667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.932859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.932871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.933084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.933100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.933269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.933282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 
00:35:25.623 [2024-10-06 11:30:22.933529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.933542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.933723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.933736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.933943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.933957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.934199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.934213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.934350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.934363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.934568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.934581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.934749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.934762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.934938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.934950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.935100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.935112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.935222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.935234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 
00:35:25.623 [2024-10-06 11:30:22.935343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.935355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.935465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.935478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.935646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.623 [2024-10-06 11:30:22.935658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.623 qpair failed and we were unable to recover it. 00:35:25.623 [2024-10-06 11:30:22.935825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.935837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.935957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.935970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.936182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.936196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.936368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.936381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.936490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.936503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.936617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.936629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.936733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.936745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 
00:35:25.624 [2024-10-06 11:30:22.936842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.936854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.937049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.937067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.937258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.937270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.937386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.937399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.937484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.937496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.937603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.937616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.937725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.937737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.937927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.937940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.938053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.938072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.938182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.938193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 
00:35:25.624 [2024-10-06 11:30:22.938359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.938372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.938482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.938495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.938611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.938624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.938728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.938740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.938849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.938862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.939024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.939036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.939139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.939152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.939336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.939348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.939489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.939500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.939611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.939624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 
00:35:25.624 [2024-10-06 11:30:22.939746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.939758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.939873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.939886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.940007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.940018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.940120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.940134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.940343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.940356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.940467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.940480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.940639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.940651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.940768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.940780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.624 qpair failed and we were unable to recover it. 00:35:25.624 [2024-10-06 11:30:22.940892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.624 [2024-10-06 11:30:22.940906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.941029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.941042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 
00:35:25.625 [2024-10-06 11:30:22.941165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.941178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.941357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.941370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.941544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.941555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.941659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.941672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.941857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.941870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.942039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.942052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.942174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.942187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.942310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.942323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.942448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.942461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.942579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.942594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 
00:35:25.625 [2024-10-06 11:30:22.942827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.942839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.943003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.943024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.943209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.943221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.943336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.943349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.943520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.943532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.943642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.943654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.943788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.943801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.943900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.943913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.944105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.944118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.944217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.944229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 
00:35:25.625 [2024-10-06 11:30:22.944337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.944349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.944467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.944479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.944589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.944602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.944764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.944776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.944882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.944895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.944990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.945001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.945118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.945130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.945227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.945238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.945413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.945426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.945506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.945516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 
00:35:25.625 [2024-10-06 11:30:22.945751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.945764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.945875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.945888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.945999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.625 [2024-10-06 11:30:22.946011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.625 qpair failed and we were unable to recover it. 00:35:25.625 [2024-10-06 11:30:22.946126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.626 [2024-10-06 11:30:22.946140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.626 qpair failed and we were unable to recover it. 00:35:25.626 [2024-10-06 11:30:22.946236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.626 [2024-10-06 11:30:22.946248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.626 qpair failed and we were unable to recover it. 00:35:25.626 [2024-10-06 11:30:22.946417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.626 [2024-10-06 11:30:22.946431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.626 qpair failed and we were unable to recover it. 00:35:25.626 [2024-10-06 11:30:22.946642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.626 [2024-10-06 11:30:22.946655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.626 qpair failed and we were unable to recover it. 00:35:25.626 [2024-10-06 11:30:22.946773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.626 [2024-10-06 11:30:22.946785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.626 qpair failed and we were unable to recover it. 00:35:25.626 [2024-10-06 11:30:22.946914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.626 [2024-10-06 11:30:22.946926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.626 qpair failed and we were unable to recover it. 00:35:25.626 [2024-10-06 11:30:22.947018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.626 [2024-10-06 11:30:22.947029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.626 qpair failed and we were unable to recover it. 
00:35:25.626 [2024-10-06 11:30:22.947166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.626 [2024-10-06 11:30:22.947181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.626 qpair failed and we were unable to recover it. 00:35:25.626 [2024-10-06 11:30:22.947297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.626 [2024-10-06 11:30:22.947308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.626 qpair failed and we were unable to recover it. 00:35:25.626 [2024-10-06 11:30:22.947423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.626 [2024-10-06 11:30:22.947435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.626 qpair failed and we were unable to recover it. 00:35:25.626 [2024-10-06 11:30:22.947540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.626 [2024-10-06 11:30:22.947551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.626 qpair failed and we were unable to recover it. 00:35:25.626 [2024-10-06 11:30:22.947791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.626 [2024-10-06 11:30:22.947804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.626 qpair failed and we were unable to recover it. 00:35:25.626 [2024-10-06 11:30:22.947981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.626 [2024-10-06 11:30:22.947994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.626 qpair failed and we were unable to recover it. 00:35:25.626 [2024-10-06 11:30:22.948105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.626 [2024-10-06 11:30:22.948117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.626 qpair failed and we were unable to recover it. 00:35:25.626 [2024-10-06 11:30:22.948289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.626 [2024-10-06 11:30:22.948302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.626 qpair failed and we were unable to recover it. 00:35:25.626 [2024-10-06 11:30:22.948400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.626 [2024-10-06 11:30:22.948410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.626 qpair failed and we were unable to recover it. 00:35:25.626 [2024-10-06 11:30:22.948515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.626 [2024-10-06 11:30:22.948526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.626 qpair failed and we were unable to recover it. 
00:35:25.626 [2024-10-06 11:30:22.948632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:25.626 [2024-10-06 11:30:22.948642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:25.626 qpair failed and we were unable to recover it.
00:35:25.632 [2024-10-06 11:30:22.983608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:25.632 [2024-10-06 11:30:22.983620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:25.632 qpair failed and we were unable to recover it.
00:35:25.632 [2024-10-06 11:30:22.983748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.632 [2024-10-06 11:30:22.983761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.632 qpair failed and we were unable to recover it. 00:35:25.632 [2024-10-06 11:30:22.983946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.632 [2024-10-06 11:30:22.983958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.632 qpair failed and we were unable to recover it. 00:35:25.632 [2024-10-06 11:30:22.984142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.632 [2024-10-06 11:30:22.984154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.632 qpair failed and we were unable to recover it. 00:35:25.632 [2024-10-06 11:30:22.984413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.632 [2024-10-06 11:30:22.984425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.632 qpair failed and we were unable to recover it. 00:35:25.632 [2024-10-06 11:30:22.984601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.632 [2024-10-06 11:30:22.984613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.632 qpair failed and we were unable to recover it. 00:35:25.632 [2024-10-06 11:30:22.984800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.632 [2024-10-06 11:30:22.984814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.632 qpair failed and we were unable to recover it. 00:35:25.632 [2024-10-06 11:30:22.985053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.632 [2024-10-06 11:30:22.985069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.632 qpair failed and we were unable to recover it. 00:35:25.632 [2024-10-06 11:30:22.985268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.632 [2024-10-06 11:30:22.985280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.632 qpair failed and we were unable to recover it. 00:35:25.632 [2024-10-06 11:30:22.985423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.632 [2024-10-06 11:30:22.985435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.632 qpair failed and we were unable to recover it. 00:35:25.632 [2024-10-06 11:30:22.985555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.632 [2024-10-06 11:30:22.985568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.632 qpair failed and we were unable to recover it. 
00:35:25.632 [2024-10-06 11:30:22.985731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.632 [2024-10-06 11:30:22.985743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.632 qpair failed and we were unable to recover it. 00:35:25.632 [2024-10-06 11:30:22.985855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.632 [2024-10-06 11:30:22.985867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.632 qpair failed and we were unable to recover it. 00:35:25.632 [2024-10-06 11:30:22.986049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.632 [2024-10-06 11:30:22.986066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.632 qpair failed and we were unable to recover it. 00:35:25.632 [2024-10-06 11:30:22.986234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.986246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.986422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.986434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.986562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.986574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.986696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.986709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.986822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.986834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.987097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.987110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.987227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.987240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 
00:35:25.633 [2024-10-06 11:30:22.987363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.987375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.987559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.987572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.987681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.987693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.987794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.987807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.987982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.987996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.988169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.988182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.988295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.988307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.988486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.988498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.988676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.988688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.988785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.988798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 
00:35:25.633 [2024-10-06 11:30:22.988911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.988923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.989108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.989123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.989359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.989371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.989485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.989498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.989703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.989717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.989900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.989912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.990082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.990096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.990290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.990302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.990488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.990500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.990664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.990676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 
00:35:25.633 [2024-10-06 11:30:22.990859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.990871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.991109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.991122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.991217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.991229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.991426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.991439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.991574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.991586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.991709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.991721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.633 [2024-10-06 11:30:22.991845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.633 [2024-10-06 11:30:22.991857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.633 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.992024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.992036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.992139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.992151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.992316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.992329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 
00:35:25.634 [2024-10-06 11:30:22.992449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.992462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.992703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.992715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.992827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.992838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.993099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.993112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.993331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.993343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.993453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.993464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.993587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.993599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.993721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.993734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.993929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.993941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.994127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.994139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 
00:35:25.634 [2024-10-06 11:30:22.994348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.994361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.994546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.994558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.994734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.994746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.994892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.994904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.994982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.994994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.995105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.995117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.995294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.995308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.995494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.995506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.995681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.995694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.995865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.995878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 
00:35:25.634 [2024-10-06 11:30:22.996042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.996054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.996164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.996179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.996297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.996310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.996477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.996489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.996601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.996614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.996728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.996740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.996997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.997010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.997266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.997278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.997398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.997410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.997526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.997537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 
00:35:25.634 [2024-10-06 11:30:22.997791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.997803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.634 [2024-10-06 11:30:22.998047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.634 [2024-10-06 11:30:22.998064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.634 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:22.998162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:22.998175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:22.998287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:22.998300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:22.998411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:22.998423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:22.998522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:22.998534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:22.998739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:22.998752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:22.998938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:22.998950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:22.999045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:22.999055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:22.999260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:22.999272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 
00:35:25.635 [2024-10-06 11:30:22.999479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:22.999492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:22.999685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:22.999697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:22.999815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:22.999827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.000012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.000024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.000187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.000199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.000327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.000339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.000501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.000514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.000694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.000707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.000808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.000820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.001004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.001017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 
00:35:25.635 [2024-10-06 11:30:23.001248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.001261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.001358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.001369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.001538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.001550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.001739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.001752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.001865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.001877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.002106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.002119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.002394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.002407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.002533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.002545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.002750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.002763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.003020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.003031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 
00:35:25.635 [2024-10-06 11:30:23.003155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.003167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.003346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.003360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.003481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.003492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.003605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.635 [2024-10-06 11:30:23.003617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.635 qpair failed and we were unable to recover it. 00:35:25.635 [2024-10-06 11:30:23.003781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.003792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.004044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.004062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.004160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.004173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.004287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.004300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.004407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.004419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.004601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.004614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 
00:35:25.636 [2024-10-06 11:30:23.004806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.004818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.005020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.005033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.005158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.005171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.005270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.005282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.005515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.005528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.005636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.005649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.005931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.005945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.006126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.006139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.006389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.006402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.006585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.006598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 
00:35:25.636 [2024-10-06 11:30:23.006710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.006722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.006987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.006999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.007178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.007192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.007371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.007383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.007506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.007518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.007635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.007649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.007776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.007788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.007903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.007916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.008090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.008103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 00:35:25.636 [2024-10-06 11:30:23.008286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.636 [2024-10-06 11:30:23.008298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.636 qpair failed and we were unable to recover it. 
00:35:25.642 [2024-10-06 11:30:23.043912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.642 [2024-10-06 11:30:23.043925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.642 qpair failed and we were unable to recover it. 00:35:25.642 [2024-10-06 11:30:23.044040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.642 [2024-10-06 11:30:23.044052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.642 qpair failed and we were unable to recover it. 00:35:25.642 [2024-10-06 11:30:23.044263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.642 [2024-10-06 11:30:23.044275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.642 qpair failed and we were unable to recover it. 00:35:25.642 [2024-10-06 11:30:23.044444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.044457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.044575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.044587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.044770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.044783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.044899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.044912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.045180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.045193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.045453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.045465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.045581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.045593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 
00:35:25.643 [2024-10-06 11:30:23.045803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.045816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.045985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.045999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.046301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.046315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.046443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.046457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.046644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.046657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.046776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.046790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.046990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.047004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.047171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.047184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.047357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.047371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.047546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.047560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 
00:35:25.643 [2024-10-06 11:30:23.047806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.047819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.047996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.048009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.048124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.048137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.048260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.048273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.048457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.048470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.048639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.048653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.048808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.048821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.048993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.049006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.049190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.049204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.049344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.049357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 
00:35:25.643 [2024-10-06 11:30:23.049490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.049502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.049654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.049667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.049831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.049844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.050016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.050029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.050216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.643 [2024-10-06 11:30:23.050230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.643 qpair failed and we were unable to recover it. 00:35:25.643 [2024-10-06 11:30:23.050397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.050411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.050590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.050603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.050782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.050795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.050934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.050950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.051214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.051229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 
00:35:25.644 [2024-10-06 11:30:23.051462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.051475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.051660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.051673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.051773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.051785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.051958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.051971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.052162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.052176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.052296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.052309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.052472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.052486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.052609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.052621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.052752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.052765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.052938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.052952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 
00:35:25.644 [2024-10-06 11:30:23.053080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.053093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.053273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.053286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.053472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.053485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.053661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.053674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.053845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.053858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.054126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.054140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.054272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.054285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.054408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.054421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.054677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.054690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.054855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.054868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 
00:35:25.644 [2024-10-06 11:30:23.054964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.054976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.055145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.055158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.055279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.055292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.055413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.055425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.055534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.055548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.055741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.055754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.055854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.055867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.055998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.644 [2024-10-06 11:30:23.056011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.644 qpair failed and we were unable to recover it. 00:35:25.644 [2024-10-06 11:30:23.056126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.056139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.056328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.056341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 
00:35:25.645 [2024-10-06 11:30:23.056532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.056546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.056750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.056763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.056885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.056898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.057013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.057027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.057201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.057214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.057349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.057363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.057565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.057579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.057816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.057830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.057946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.057962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.058076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.058089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 
00:35:25.645 [2024-10-06 11:30:23.058321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.058334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.058462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.058475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.058583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.058597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.058769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.058782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.058889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.058902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.059073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.059087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.059208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.059221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.059348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.059362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.059620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.059632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.059743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.059756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 
00:35:25.645 [2024-10-06 11:30:23.059856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.059868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.060066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.060079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.060194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.060208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.060386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.060399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.060557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.060571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.060684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.060697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.060782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.060793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.060963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.060976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.061087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.061100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.061273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.061286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 
00:35:25.645 [2024-10-06 11:30:23.061475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.061488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.061665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.061680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.061791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.645 [2024-10-06 11:30:23.061803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.645 qpair failed and we were unable to recover it. 00:35:25.645 [2024-10-06 11:30:23.061924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.061939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.062041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.062053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.062182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.062196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.062319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.062332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.062446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.062459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.062644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.062656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.062766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.062777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 
00:35:25.646 [2024-10-06 11:30:23.063047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.063062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.063254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.063264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.063451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.063461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.063652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.063662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.063920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.063931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.064045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.064054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.064270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.064280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.064389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.064398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.064498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.064510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.064691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.064701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 
00:35:25.646 [2024-10-06 11:30:23.064899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.064909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.065072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.065083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.065266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.065276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.065527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.065538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.065792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.065804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.065925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.065936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.066045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.066055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.066305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.066317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.066501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.066512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.066703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.066715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 
00:35:25.646 [2024-10-06 11:30:23.066834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.066845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.067009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.067019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.067123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.067135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.067315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.067328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.067475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.067486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.067616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.067627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.067709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.067719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.067890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.646 [2024-10-06 11:30:23.067902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.646 qpair failed and we were unable to recover it. 00:35:25.646 [2024-10-06 11:30:23.068095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.068107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.068254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.068267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 
00:35:25.647 [2024-10-06 11:30:23.068401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.068414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.068592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.068604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.068781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.068793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.068907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.068919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.069043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.069055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.069228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.069241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.069372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.069385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.069509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.069522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.069625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.069638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.069736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.069747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 
00:35:25.647 [2024-10-06 11:30:23.069984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.069996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.070115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.070130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.070322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.070334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.070453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.070466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.070581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.070593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.070756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.070770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.070973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.070986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.071069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.071081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.071217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.071232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.071398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.071411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 
00:35:25.647 [2024-10-06 11:30:23.071580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.071592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.071780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.071791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.071979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.071991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.072255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.072267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.072451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.072465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.072558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.072569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.072702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.072714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.072881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.072892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.647 qpair failed and we were unable to recover it. 00:35:25.647 [2024-10-06 11:30:23.073076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.647 [2024-10-06 11:30:23.073089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.073291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.073304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 
00:35:25.648 [2024-10-06 11:30:23.073567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.073580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.073762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.073774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.073895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.073907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.074144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.074156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.074257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.074268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.074516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.074528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.074710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.074722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.074955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.074968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.075078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.075090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.075334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.075347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 
00:35:25.648 [2024-10-06 11:30:23.075476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.075488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.075673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.075685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.075805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.075817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.075930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.075942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.076192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.076205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.076407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.076420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.076584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.076596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.076771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.076784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.076901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.076913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.077029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.077042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 
00:35:25.648 [2024-10-06 11:30:23.077221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.077233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.077409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.077421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.077683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.077696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.077901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.077913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.078118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.078131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.078206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.078219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.078330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.078341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.078495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.078508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.078689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.078704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.078812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.078824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 
00:35:25.648 [2024-10-06 11:30:23.078940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.078952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.079075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.079089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.079239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.079252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.648 [2024-10-06 11:30:23.079510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.648 [2024-10-06 11:30:23.079522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.648 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.079705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.079717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.079886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.079898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.080147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.080160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.080342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.080355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.080477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.080490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.080667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.080681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 
00:35:25.649 [2024-10-06 11:30:23.080916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.080928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.081110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.081123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.081313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.081326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.081444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.081456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.081572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.081585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.081678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.081690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.081803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.081817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.082073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.082087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.082264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.082277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.082447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.082461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 
00:35:25.649 [2024-10-06 11:30:23.082656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.082669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.082852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.082865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.083051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.083076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.083192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.083206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.083318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.083339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.083548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.083561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.083797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.083810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.083935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.083949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.084067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.084080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.084193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.084207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 
00:35:25.649 [2024-10-06 11:30:23.084371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.084383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.084572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.084585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.084841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.084854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.085038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.085051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.085221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.085233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.085364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.085377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.085555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.085569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.085681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.085694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.085805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.085820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 00:35:25.649 [2024-10-06 11:30:23.086083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.649 [2024-10-06 11:30:23.086109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.649 qpair failed and we were unable to recover it. 
00:35:25.649 [2024-10-06 11:30:23.086233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.086245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.086414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.086426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.086616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.086630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.086753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.086766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.086893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.086906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.087023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.087035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.087169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.087182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.087353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.087366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.087489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.087502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.087680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.087693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 
00:35:25.650 [2024-10-06 11:30:23.087791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.087803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.087989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.088002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.088128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.088142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.088322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.088336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.088499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.088511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.088687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.088700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.088880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.088893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.088992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.089005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.089192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.089206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.089329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.089340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 
00:35:25.650 [2024-10-06 11:30:23.089509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.089522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.089647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.089660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.089850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.089863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.090040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.090052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.090237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.090250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.090416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.090429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.090600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.090614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.090730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.090742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.090867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.090880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.091005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.091018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 
00:35:25.650 [2024-10-06 11:30:23.091195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.091209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.091380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.091393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.091671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.650 [2024-10-06 11:30:23.091684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.650 qpair failed and we were unable to recover it. 00:35:25.650 [2024-10-06 11:30:23.091867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.091880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.092002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.092013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.092190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.092203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.092436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.092450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.092637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.092650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.092768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.092782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.092982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.092995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 
00:35:25.651 [2024-10-06 11:30:23.093183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.093196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.093384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.093397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.093527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.093540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.093656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.093669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.093878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.093891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.094089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.094103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.094222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.094235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.094367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.094379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.094484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.094496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.094619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.094632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 
00:35:25.651 [2024-10-06 11:30:23.094892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.094905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.095096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.095110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.095235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.095249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.095349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.095360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.095565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.095577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.095839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.095852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.096025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.096038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.096212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.096225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.096347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.096360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.096476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.096489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 
00:35:25.651 [2024-10-06 11:30:23.096617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.096629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.096796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.096809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.096938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.096951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.097219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.097232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.097363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.097377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.097562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.097576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.651 [2024-10-06 11:30:23.097707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.651 [2024-10-06 11:30:23.097720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.651 qpair failed and we were unable to recover it. 00:35:25.652 [2024-10-06 11:30:23.097924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.652 [2024-10-06 11:30:23.097937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.652 qpair failed and we were unable to recover it. 00:35:25.652 [2024-10-06 11:30:23.098126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.652 [2024-10-06 11:30:23.098139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.652 qpair failed and we were unable to recover it. 00:35:25.652 [2024-10-06 11:30:23.098310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.652 [2024-10-06 11:30:23.098323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.652 qpair failed and we were unable to recover it. 
00:35:25.658 [2024-10-06 11:30:23.134506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-10-06 11:30:23.134520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.658 qpair failed and we were unable to recover it. 00:35:25.658 [2024-10-06 11:30:23.134757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-10-06 11:30:23.134769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.658 qpair failed and we were unable to recover it. 00:35:25.658 [2024-10-06 11:30:23.135043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-10-06 11:30:23.135056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.658 qpair failed and we were unable to recover it. 00:35:25.658 [2024-10-06 11:30:23.135241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-10-06 11:30:23.135254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.658 qpair failed and we were unable to recover it. 00:35:25.658 [2024-10-06 11:30:23.135420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-10-06 11:30:23.135432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.658 qpair failed and we were unable to recover it. 00:35:25.658 [2024-10-06 11:30:23.135636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-10-06 11:30:23.135650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.658 qpair failed and we were unable to recover it. 00:35:25.658 [2024-10-06 11:30:23.135885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-10-06 11:30:23.135898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.658 qpair failed and we were unable to recover it. 00:35:25.658 [2024-10-06 11:30:23.136077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-10-06 11:30:23.136091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.658 qpair failed and we were unable to recover it. 00:35:25.658 [2024-10-06 11:30:23.136257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-10-06 11:30:23.136270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.658 qpair failed and we were unable to recover it. 00:35:25.658 [2024-10-06 11:30:23.136451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-10-06 11:30:23.136465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.658 qpair failed and we were unable to recover it. 
00:35:25.658 [2024-10-06 11:30:23.136650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-10-06 11:30:23.136663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.658 qpair failed and we were unable to recover it. 00:35:25.658 [2024-10-06 11:30:23.136785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-10-06 11:30:23.136798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.658 qpair failed and we were unable to recover it. 00:35:25.658 [2024-10-06 11:30:23.136910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-10-06 11:30:23.136921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.658 qpair failed and we were unable to recover it. 00:35:25.658 [2024-10-06 11:30:23.137055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-10-06 11:30:23.137072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.658 qpair failed and we were unable to recover it. 00:35:25.658 [2024-10-06 11:30:23.137256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-10-06 11:30:23.137269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.658 qpair failed and we were unable to recover it. 00:35:25.658 [2024-10-06 11:30:23.137394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-10-06 11:30:23.137407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.658 qpair failed and we were unable to recover it. 00:35:25.658 [2024-10-06 11:30:23.137591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-10-06 11:30:23.137604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.658 qpair failed and we were unable to recover it. 00:35:25.658 [2024-10-06 11:30:23.137720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.658 [2024-10-06 11:30:23.137733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.658 qpair failed and we were unable to recover it. 00:35:25.658 [2024-10-06 11:30:23.137850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.137863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.138036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.138048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 
00:35:25.659 [2024-10-06 11:30:23.138237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.138250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.138422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.138435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.138617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.138630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.138818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.138832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.139008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.139021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.139208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.139222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.139405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.139418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.139526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.139538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.139652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.139664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.139896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.139909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 
00:35:25.659 [2024-10-06 11:30:23.140112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.140126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.140388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.140401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.140601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.140614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.140726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.140738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.140852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.140865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.141049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.141068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.141243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.141256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.141528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.141540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.141655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.141667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.141872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.141885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 
00:35:25.659 [2024-10-06 11:30:23.142073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.142087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.142209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.142222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.142405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.142419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.142542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.142554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.142718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.142731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.142842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.142855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.143041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.143053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.143160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.143173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.143290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.143303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.143408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.143422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 
00:35:25.659 [2024-10-06 11:30:23.143589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.143601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.143784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.143796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.143912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.143925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.144093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.144106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.144236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.144249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.659 qpair failed and we were unable to recover it. 00:35:25.659 [2024-10-06 11:30:23.144439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.659 [2024-10-06 11:30:23.144453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.144570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.144581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.144854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.144867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.145129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.145142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.145328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.145340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 
00:35:25.660 [2024-10-06 11:30:23.145599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.145613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.145786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.145799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.145907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.145920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.146035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.146049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.146251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.146265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.146433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.146446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.146566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.146579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.146786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.146799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.146982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.146995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.147109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.147122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 
00:35:25.660 [2024-10-06 11:30:23.147241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.147255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.147428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.147440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.147553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.147566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.147813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.147825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.147980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.147993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.148091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.148107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.148270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.148284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.148531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.148544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.148749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.148762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.148962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.148975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 
00:35:25.660 [2024-10-06 11:30:23.149137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.149150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.149334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.149347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.149463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.149476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.149654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.149667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.149769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.149782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.149914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.149928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.150168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.150181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.150349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.150361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.150538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.150550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.150748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.150761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 
00:35:25.660 [2024-10-06 11:30:23.150944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.150957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.151159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.151173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.151451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.660 [2024-10-06 11:30:23.151464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.660 qpair failed and we were unable to recover it. 00:35:25.660 [2024-10-06 11:30:23.151565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.151577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.151731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.151743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.151843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.151856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.151964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.151976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.152104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.152118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.152238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.152250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.152505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.152519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 
00:35:25.661 [2024-10-06 11:30:23.152617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.152630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.152864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.152877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.153117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.153131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.153250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.153263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.153519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.153533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.153649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.153662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.153870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.153883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.154071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.154084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.154276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.154290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.154472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.154485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 
00:35:25.661 [2024-10-06 11:30:23.154602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.154614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.154804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.154818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.154945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.154958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.155146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.155161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.155410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.155424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.155524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.155538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.155609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.155621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.155808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.155821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.156004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.156018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.156211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.156225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 
00:35:25.661 [2024-10-06 11:30:23.156429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.156441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.156613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.156626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.156752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.156765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.156886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.156900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.157111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.157125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.157205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.157216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.157346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.157359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.157620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.157633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.157749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.157762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.661 qpair failed and we were unable to recover it. 00:35:25.661 [2024-10-06 11:30:23.157958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.661 [2024-10-06 11:30:23.157971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 
00:35:25.662 [2024-10-06 11:30:23.158094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.158108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.158364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.158378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.158500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.158514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.158683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.158696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.158798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.158811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.159024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.159037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.159234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.159247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.159355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.159368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.159536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.159549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.159813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.159826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 
00:35:25.662 [2024-10-06 11:30:23.160009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.160021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.160202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.160215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.160348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.160362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.160553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.160567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.160733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.160746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.160859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.160872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.160987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.161000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.161257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.161271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.161507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.161520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.161622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.161635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 
00:35:25.662 [2024-10-06 11:30:23.161708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.161720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.161952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.161965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.162097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.162110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.162227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.162251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.162361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.162374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.162494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.162509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.162613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.162627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.162805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.162817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.163011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.163023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.163939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.163966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 
00:35:25.662 [2024-10-06 11:30:23.164124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.662 [2024-10-06 11:30:23.164138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.662 qpair failed and we were unable to recover it. 00:35:25.662 [2024-10-06 11:30:23.164253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.164265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.164476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.164488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.164626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.164638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.164832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.164844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.165026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.165038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.165188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.165202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.165406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.165418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.165665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.165677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.165923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.165936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 
00:35:25.663 [2024-10-06 11:30:23.166131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.166144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.166330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.166342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.166497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.166510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.166631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.166643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.166757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.166770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.166981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.166994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.167092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.167103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.167274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.167288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.167408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.167421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.167590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.167602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 
00:35:25.663 [2024-10-06 11:30:23.167712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.167725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.167865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.167877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.167995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.168009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.663 [2024-10-06 11:30:23.168118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.663 [2024-10-06 11:30:23.168130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.663 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.168302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.168315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.168482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.168495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.168605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.168618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.168837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.168850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.169029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.169041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.169304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.169318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 
00:35:25.949 [2024-10-06 11:30:23.169497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.169510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.169629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.169642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.169774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.169786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.169948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.169961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.170128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.170141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.170328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.170343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.170492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.170504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.170673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.170684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.170857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.170870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.171103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.171116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 
00:35:25.949 [2024-10-06 11:30:23.171294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.171308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.171411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.171422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.171605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.171617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.171812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.171826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.949 [2024-10-06 11:30:23.171944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.949 [2024-10-06 11:30:23.171957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.949 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.172069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.172081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.172280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.172293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.172472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.172485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.172588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.172601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.172783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.172795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 
00:35:25.950 [2024-10-06 11:30:23.172913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.172926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.173120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.173133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.173309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.173322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.173534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.173547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.173656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.173670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.173781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.173793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.173909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.173921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.174078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.174091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.174265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.174279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.174392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.174404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 
00:35:25.950 [2024-10-06 11:30:23.174590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.174603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.174707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.174719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.174837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.174849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.175032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.175045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.175223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.175237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.175416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.175439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.175632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.175645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.175745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.175755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.175918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.175930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.176110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.176122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 
00:35:25.950 [2024-10-06 11:30:23.176225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.176237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.176473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.176485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.176597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.176608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.176806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.176819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.177004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.177015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.177139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.177153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.950 qpair failed and we were unable to recover it. 00:35:25.950 [2024-10-06 11:30:23.177266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.950 [2024-10-06 11:30:23.177277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.177458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.177471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.177576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.177587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.177690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.177702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 
00:35:25.951 [2024-10-06 11:30:23.177824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.177835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.177952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.177964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.178134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.178147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.178313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.178325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.178494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.178507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.178681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.178693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.178793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.178806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.178968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.178979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.179120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.179132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.179236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.179248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 
00:35:25.951 [2024-10-06 11:30:23.179420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.179432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.179610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.179622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.179808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.179819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.179917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.179929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.180094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.180106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.180284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.180296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.180416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.180429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.180529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.180540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.180653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.180663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.180784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.180796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 
00:35:25.951 [2024-10-06 11:30:23.180903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.180915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.181086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.181097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.181196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.181209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.181352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.181363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.181447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.181457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.181642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.181654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.181842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.181853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.951 [2024-10-06 11:30:23.181965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.951 [2024-10-06 11:30:23.181977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.951 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.182137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.182149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.182267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.182279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 
00:35:25.952 [2024-10-06 11:30:23.182371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.182381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.182499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.182510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.182614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.182625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.182725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.182737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.182909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.182921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.183031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.183045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.183297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.183309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.183491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.183503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.183619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.183631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.183794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.183806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 
00:35:25.952 [2024-10-06 11:30:23.184036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.184048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.184232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.184244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.184408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.184420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.184664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.184677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.184840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.184852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.185051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.185068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.185205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.185216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.185377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.185389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.185501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.185513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.185636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.185648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 
00:35:25.952 [2024-10-06 11:30:23.185828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.185839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.185939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.185949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.186067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.186080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.186191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.186203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.186407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.186420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.186597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.186609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.186725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.186736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.186944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.186956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.187125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.187138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.187270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.187282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 
00:35:25.952 [2024-10-06 11:30:23.187385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.187398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.952 qpair failed and we were unable to recover it. 00:35:25.952 [2024-10-06 11:30:23.187514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.952 [2024-10-06 11:30:23.187525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.187626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.187638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.187765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.187776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.187874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.187886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.187978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.187990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.188092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.188105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.188279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.188291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.188386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.188398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.188486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.188497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 
00:35:25.953 [2024-10-06 11:30:23.188608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.188621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.188751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.188762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.188872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.188885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.188990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.189001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.189115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.189127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.189286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.189300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.189407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.189420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.189667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.189679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.189842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.189853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.190017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.190029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 
00:35:25.953 [2024-10-06 11:30:23.190198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.190210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.190324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.190336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.190521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.190533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.190655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.190668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.190902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.190914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.191017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.191028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.191196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.191209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.191389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.191401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.191572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.191585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.191749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.191761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 
00:35:25.953 [2024-10-06 11:30:23.191889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.191901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.192183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.192195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.953 qpair failed and we were unable to recover it. 00:35:25.953 [2024-10-06 11:30:23.192291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.953 [2024-10-06 11:30:23.192303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.192549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.192561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.192676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.192687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.192889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.192901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.193014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.193025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.193199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.193211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.193374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.193387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.193566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.193578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 
00:35:25.954 [2024-10-06 11:30:23.193833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.193845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.194055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.194071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.194205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.194217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.194380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.194391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.194556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.194569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.194669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.194681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.194777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.194789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.194954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.194966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.195205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.195217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.195328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.195340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 
00:35:25.954 [2024-10-06 11:30:23.195470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.195482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.195620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.195632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.195802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.195813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.195977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.195990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.196182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.196194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.196308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.196320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.196442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.196454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.196575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.196587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.196675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.196686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 00:35:25.954 [2024-10-06 11:30:23.196813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.954 [2024-10-06 11:30:23.196825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.954 qpair failed and we were unable to recover it. 
00:35:25.955 [2024-10-06 11:30:23.196960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.196971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.197150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.197162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.197272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.197284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.197470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.197482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.197604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.197616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.197741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.197753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.197928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.197939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.198056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.198069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.198248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.198259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.198378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.198390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 
00:35:25.955 [2024-10-06 11:30:23.198564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.198576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.198753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.198765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.198938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.198950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.199156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.199168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.199273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.199285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.199477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.199489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.199574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.199584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.199787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.199799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.199990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.200002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.200182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.200194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 
00:35:25.955 [2024-10-06 11:30:23.200306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.200318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.200554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.200566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.200672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.200686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.200851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.200863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.200973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.200984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.201093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.201106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.201226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.201239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.201344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.201355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.201603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.201615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.201749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.201761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 
00:35:25.955 [2024-10-06 11:30:23.202028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.202040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.202245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.955 [2024-10-06 11:30:23.202257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.955 qpair failed and we were unable to recover it. 00:35:25.955 [2024-10-06 11:30:23.202388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.202400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.202603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.202615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.202728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.202739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.202836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.202847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.202956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.202968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.203198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.203210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.203325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.203337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.203520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.203533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 
00:35:25.956 [2024-10-06 11:30:23.203644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.203656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.203774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.203786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.203904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.203916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.204019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.204029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.204205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.204217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.204330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.204342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.204507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.204519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.204627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.204639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.204880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.204892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.204993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.205003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 
00:35:25.956 [2024-10-06 11:30:23.205179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.205191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.205374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.205385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.205659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.205671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.205892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.205905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.206089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.206102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.206273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.206285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.206473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.206484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.206608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.206619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.206796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.206808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.206935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.206946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 
00:35:25.956 [2024-10-06 11:30:23.207151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.207163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.207259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.207272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.207454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.207470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.207659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.207672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.207800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.207812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.956 [2024-10-06 11:30:23.207945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.956 [2024-10-06 11:30:23.207956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.956 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.208091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.208104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.208204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.208216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.208377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.208390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.208503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.208515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 
00:35:25.957 [2024-10-06 11:30:23.208624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.208636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.208805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.208818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.208890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.208901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.209154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.209168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.209402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.209415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.209532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.209544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.209655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.209669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.209771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.209783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.209965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.209976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.210142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.210154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 
00:35:25.957 [2024-10-06 11:30:23.210334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.210346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.210522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.210535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.210650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.210662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.210768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.210779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.210882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.210894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.211004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.211015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.211148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.211161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.211328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.211339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.211472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.211485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.211667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.211678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 
00:35:25.957 [2024-10-06 11:30:23.211859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.211871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.212051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.212068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.212182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.212194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.212475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.212488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.212649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.212662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.212835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.212846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.213016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.213028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.213262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.213274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.213391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.957 [2024-10-06 11:30:23.213403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.957 qpair failed and we were unable to recover it. 00:35:25.957 [2024-10-06 11:30:23.213519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.213531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 
00:35:25.958 [2024-10-06 11:30:23.213639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.213650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.213754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.213766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.213929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.213943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.214228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.214241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.214320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.214330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.214604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.214616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.214747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.214759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.214888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.214901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.215145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.215158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.215254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.215267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 
00:35:25.958 [2024-10-06 11:30:23.215363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.215375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.215472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.215483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.215582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.215594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.215779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.215790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.215996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.216008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.216177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.216189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.216365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.216377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.216478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.216490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.216664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.216675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.216805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.216817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 
00:35:25.958 [2024-10-06 11:30:23.216938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.216950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.217145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.217158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.217271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.217283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.217484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.217495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.217653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.217665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.217783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.217795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.217999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.218011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.218288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.218300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.218430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.218441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.218516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.218527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 
00:35:25.958 [2024-10-06 11:30:23.218642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.218653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.218860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.218873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.218988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.218999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.219199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.219211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.219331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.219343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.219461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.219473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.219650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.219661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.219898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.219910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.958 [2024-10-06 11:30:23.220099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.958 [2024-10-06 11:30:23.220111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.958 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.220224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.220236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 
00:35:25.959 [2024-10-06 11:30:23.220425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.220437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.220603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.220615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.220800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.220814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.220915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.220928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.221120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.221133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.221265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.221277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.221396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.221408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.221569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.221580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.221749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.221761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.221874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.221887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 
00:35:25.959 [2024-10-06 11:30:23.222068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.222081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.222263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.222275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.222443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.222455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.222567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.222579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.222763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.222776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.222885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.222897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.223013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.223025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.223206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.223218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.223329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.223342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.223598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.223610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 
00:35:25.959 [2024-10-06 11:30:23.223793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.223805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.223985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.223997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.224108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.224120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.224371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.224383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.224563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.224575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.224706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.224719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.224830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.959 [2024-10-06 11:30:23.224842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.959 qpair failed and we were unable to recover it. 00:35:25.959 [2024-10-06 11:30:23.224953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.224965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.225142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.225154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.225316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.225328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 
00:35:25.960 [2024-10-06 11:30:23.225444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.225456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.225556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.225569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.225703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.225714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.225915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.225927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.226097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.226110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.226314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.226326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.226424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.226435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.226561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.226573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.226740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.226753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.226887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.226899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 
00:35:25.960 [2024-10-06 11:30:23.226996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.227008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.227214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.227227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.227412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.227425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.227629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.227642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.227873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.227885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.228050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.228067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.228298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.228310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.228411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.228423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.228693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.228706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.228888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.228900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 
00:35:25.960 [2024-10-06 11:30:23.229082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.229095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.229277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.229288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.229400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.229411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.229581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.229593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.229688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.229700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.229815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.229827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.230049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.230066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.230238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.230250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.230427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.230439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 00:35:25.960 [2024-10-06 11:30:23.230543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.960 [2024-10-06 11:30:23.230556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.960 qpair failed and we were unable to recover it. 
00:35:25.961 [2024-10-06 11:30:23.230648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.230660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.230762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.230775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.230952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.230964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.231080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.231093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.231276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.231288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.231407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.231418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.231518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.231530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.231718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.231729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.231834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.231846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.231952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.231964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 
00:35:25.961 [2024-10-06 11:30:23.232034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.232045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.232168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.232180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.232341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.232354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.232525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.232537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.232660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.232671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.232780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.232792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.232966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.232977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.233089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.233101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.233269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.233281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.233445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.233456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 
00:35:25.961 [2024-10-06 11:30:23.233701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.233712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.233906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.233918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.234023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.234037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.234142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.234154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.234391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.234404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.234583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.234595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.234705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.234718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.234890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.234902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.235096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.235108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.235224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.235236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 
00:35:25.961 [2024-10-06 11:30:23.235411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.235423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.235537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.235549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.961 qpair failed and we were unable to recover it. 00:35:25.961 [2024-10-06 11:30:23.235657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.961 [2024-10-06 11:30:23.235670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.235851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.235863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.235958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.235971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.236145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.236159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.236266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.236278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.236386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.236398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.236499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.236511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.236772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.236784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 
00:35:25.962 [2024-10-06 11:30:23.237023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.237035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.237217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.237230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.237349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.237361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.237448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.237461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.237662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.237674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.237906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.237918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.238040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.238053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.238244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.238256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.238436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.238448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.238629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.238642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 
00:35:25.962 [2024-10-06 11:30:23.238807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.238827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.239008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.239020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.239118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.239130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.239344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.239357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.239484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.239496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.239656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.239668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.239780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.239793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.239866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.239877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.240084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.240097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.240222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.240233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 
00:35:25.962 [2024-10-06 11:30:23.240341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.240353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.240536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.240548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.240679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.240694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.240817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.240829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.962 [2024-10-06 11:30:23.241014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.962 [2024-10-06 11:30:23.241026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.962 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.241204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.241216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.241388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.241400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.241515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.241526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.241637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.241649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.241855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.241867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 
00:35:25.963 [2024-10-06 11:30:23.241996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.242008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.242135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.242148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.242314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.242326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.242480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.242493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.242605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.242618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.242801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.242812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.242998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.243011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.243191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.243203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.243370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.243382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.243507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.243520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 
00:35:25.963 [2024-10-06 11:30:23.243719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.243731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.243902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.243914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.244069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.244081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.244298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.244310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.244554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.244566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.244806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.244818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.244995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.245007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.245116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.245128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.245288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.245300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 
00:35:25.963 [2024-10-06 11:30:23.245396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6946a0 is same with the state(6) to be set 00:35:25.963 [2024-10-06 11:30:23.245649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.245694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.245906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.245945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.246056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.246082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.246207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.246225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.246413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.246431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.246629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.246646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.246750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.246764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.963 [2024-10-06 11:30:23.246876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.963 [2024-10-06 11:30:23.246888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.963 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.247010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.247022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 
00:35:25.964 [2024-10-06 11:30:23.247202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.247214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.247389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.247401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.247557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.247570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.247813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.247825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.248003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.248015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.248131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.248144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.248379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.248391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.248648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.248660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.248790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.248802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.248968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.248981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 
00:35:25.964 [2024-10-06 11:30:23.249078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.249090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.249273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.249285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.249451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.249463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.249569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.249582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.249691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.249702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.249887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.249899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.250016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.250028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.250200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.250214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.250313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.250326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.250507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.250519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 
00:35:25.964 [2024-10-06 11:30:23.250774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.250786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.250952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.250964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.251171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.251184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.251352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.251365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.251541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.251554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.251716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.251729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.251970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.251982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.252095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.252108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.252339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.252351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.252513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.252525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 
00:35:25.964 [2024-10-06 11:30:23.252695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.252707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.964 qpair failed and we were unable to recover it. 00:35:25.964 [2024-10-06 11:30:23.252919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.964 [2024-10-06 11:30:23.252932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.253194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.253206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.253442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.253454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.253562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.253574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.253754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.253765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.253864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.253875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.254062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.254074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.254330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.254342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.254450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.254462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 
00:35:25.965 [2024-10-06 11:30:23.254617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.254628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.254764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.254776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.254956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.254968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.255133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.255146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.255322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.255334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.255517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.255528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.255760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.255772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.255883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.255895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.256045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.256061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.256161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.256172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 
00:35:25.965 [2024-10-06 11:30:23.256335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.256347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.256537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.256549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.256745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.256756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.256935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.256948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.257152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.257164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.257331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.257342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.257511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.257523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.257753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.257767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.258051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.258068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.258180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.258192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 
00:35:25.965 [2024-10-06 11:30:23.258366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.258378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.258477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.258488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.258670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.258681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.258858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.258870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.259068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.259080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.259262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.259275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.259387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.259399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.259640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.259652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.965 [2024-10-06 11:30:23.259882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.965 [2024-10-06 11:30:23.259893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.965 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.260021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.260033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 
00:35:25.966 [2024-10-06 11:30:23.260160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.260174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.260341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.260353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.260467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.260480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.260637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.260648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.260825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.260837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.260928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.260939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.261022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.261035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.261279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.261292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.261481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.261493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.261684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.261695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 
00:35:25.966 [2024-10-06 11:30:23.261870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.261882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.262117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.262129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.262239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.262251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.262334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.262344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.262602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.262614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.262726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.262737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.262931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.262942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.263123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.263135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.263240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.263252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.263429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.263441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 
00:35:25.966 [2024-10-06 11:30:23.263604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.263616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.263806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.263819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.263925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.263935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.264118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.264131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.264295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.264307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.264569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.264580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.264701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.264712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.264914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.264931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.265113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.265133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.265240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.265252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 
00:35:25.966 [2024-10-06 11:30:23.265374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.265385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.265457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.265468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.265656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.265668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.265863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.265876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.266077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.266089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.266210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.266222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.266331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.266343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.266458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.266469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.266633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.266646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.266776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.266788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 
00:35:25.966 [2024-10-06 11:30:23.266900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.966 [2024-10-06 11:30:23.266913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.966 qpair failed and we were unable to recover it. 00:35:25.966 [2024-10-06 11:30:23.267199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.267212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.267386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.267397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.267476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.267487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.267649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.267661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.267863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.267875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.267973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.267986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.268203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.268216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.268327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.268338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.268573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.268585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 
00:35:25.967 [2024-10-06 11:30:23.268749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.268761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.268888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.268905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.269140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.269153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.269239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.269249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.269462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.269475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.269660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.269671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.269871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.269883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.270067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.270079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.270285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.270296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.270475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.270487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 
00:35:25.967 [2024-10-06 11:30:23.270718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.270729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.270850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.270862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.270987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.270999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.271078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.271090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.271182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.271192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.271363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.271375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.271548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.271560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.271724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.271737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.271926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.271939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.272120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.272132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 
00:35:25.967 [2024-10-06 11:30:23.272315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.272328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.272506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.272518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.272704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.272717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.272880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.272892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.273075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.273088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.273196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.273207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.273435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.273447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.273633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.273645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.273835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.273847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.273958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.273970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 
00:35:25.967 [2024-10-06 11:30:23.274202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.274215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.274469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.274481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.967 qpair failed and we were unable to recover it. 00:35:25.967 [2024-10-06 11:30:23.274667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.967 [2024-10-06 11:30:23.274679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.968 qpair failed and we were unable to recover it. 00:35:25.968 [2024-10-06 11:30:23.274865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.968 [2024-10-06 11:30:23.274877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.968 qpair failed and we were unable to recover it. 00:35:25.968 [2024-10-06 11:30:23.275164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.968 [2024-10-06 11:30:23.275177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.968 qpair failed and we were unable to recover it. 00:35:25.968 [2024-10-06 11:30:23.275359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.968 [2024-10-06 11:30:23.275370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.968 qpair failed and we were unable to recover it. 00:35:25.968 [2024-10-06 11:30:23.275602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.968 [2024-10-06 11:30:23.275614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.968 qpair failed and we were unable to recover it. 00:35:25.968 [2024-10-06 11:30:23.275793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.968 [2024-10-06 11:30:23.275805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.968 qpair failed and we were unable to recover it. 00:35:25.968 [2024-10-06 11:30:23.276010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.968 [2024-10-06 11:30:23.276022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.968 qpair failed and we were unable to recover it. 00:35:25.968 [2024-10-06 11:30:23.276145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.968 [2024-10-06 11:30:23.276157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.968 qpair failed and we were unable to recover it. 
[... identical connect() failures (errno = 111) and unrecoverable qpair errors for tqpair=0x7f81cc000b90 (addr=10.0.0.2, port=4420) continue from 11:30:23.276 through 11:30:23.307 ...]
00:35:25.973 [2024-10-06 11:30:23.307828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.307840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.308077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.308090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.308209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.308221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.308304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.308315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.308447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.308459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.308635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.308647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.308767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.308780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.309016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.309028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.309152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.309164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.309289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.309302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 
00:35:25.973 [2024-10-06 11:30:23.309515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.309527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.309630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.309642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.309757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.309769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.309933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.309946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.310132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.310145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.310320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.310331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.310459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.310471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.310592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.310605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.310728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.310739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.310952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.310963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 
00:35:25.973 [2024-10-06 11:30:23.311079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.311092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.311191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.311203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.311307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.311320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.311474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.311486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.311604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.311617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.311703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.311715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.311783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.973 [2024-10-06 11:30:23.311793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.973 qpair failed and we were unable to recover it. 00:35:25.973 [2024-10-06 11:30:23.311952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.311966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.312089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.312101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.312258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.312270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 
00:35:25.974 [2024-10-06 11:30:23.312380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.312392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.312580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.312591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.312690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.312701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.312974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.312986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.313230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.313243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.313446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.313458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.313532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.313542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.313631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.313642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.313823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.313835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.313942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.313953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 
00:35:25.974 [2024-10-06 11:30:23.314125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.314137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.314295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.314307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.314434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.314447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.314564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.314576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.314739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.314751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.314949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.314961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.315129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.315141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.315318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.315330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.315567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.315579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.315756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.315768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 
00:35:25.974 [2024-10-06 11:30:23.315880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.315892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.316146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.316159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.316265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.316277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.316439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.316451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.316614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.316625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.316790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.316802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.316970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.316982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.317171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.317183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.317374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.317386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.317569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.317581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 
00:35:25.974 [2024-10-06 11:30:23.317691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.317703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.317941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.317954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.318122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.318138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.318310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.318321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.318496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.318508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.318702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.318713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.318900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.318913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.319094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.319108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.319237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.974 [2024-10-06 11:30:23.319248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.974 qpair failed and we were unable to recover it. 00:35:25.974 [2024-10-06 11:30:23.319506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.319519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 
00:35:25.975 [2024-10-06 11:30:23.319755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.319767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 00:35:25.975 [2024-10-06 11:30:23.319885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.319897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 00:35:25.975 [2024-10-06 11:30:23.320158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.320171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 00:35:25.975 [2024-10-06 11:30:23.320351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.320363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 00:35:25.975 [2024-10-06 11:30:23.320532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.320544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 00:35:25.975 [2024-10-06 11:30:23.320787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.320799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 00:35:25.975 [2024-10-06 11:30:23.320916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.320928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 00:35:25.975 [2024-10-06 11:30:23.321110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.321123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 00:35:25.975 [2024-10-06 11:30:23.321228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.321240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 00:35:25.975 [2024-10-06 11:30:23.321408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.321419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 
00:35:25.975 [2024-10-06 11:30:23.321605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.321617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 00:35:25.975 [2024-10-06 11:30:23.321734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.321745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 00:35:25.975 [2024-10-06 11:30:23.322004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.322016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 00:35:25.975 [2024-10-06 11:30:23.322262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.322273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 00:35:25.975 [2024-10-06 11:30:23.322440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.322452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 00:35:25.975 [2024-10-06 11:30:23.322627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.322640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 00:35:25.975 [2024-10-06 11:30:23.322760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.322771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 00:35:25.975 [2024-10-06 11:30:23.322867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.322879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 00:35:25.975 [2024-10-06 11:30:23.323143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.323155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 00:35:25.975 [2024-10-06 11:30:23.323318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.975 [2024-10-06 11:30:23.323330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.975 qpair failed and we were unable to recover it. 
00:35:25.975 [2024-10-06 11:30:23.323488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:25.975 [2024-10-06 11:30:23.323500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:25.975 qpair failed and we were unable to recover it.
00:35:25.975 [2024-10-06 11:30:23.323619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:25.975 [2024-10-06 11:30:23.323631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:25.975 qpair failed and we were unable to recover it.
00:35:25.975 [2024-10-06 11:30:23.323746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:25.975 [2024-10-06 11:30:23.323758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:25.975 qpair failed and we were unable to recover it.
00:35:25.975 [2024-10-06 11:30:23.323934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:25.975 [2024-10-06 11:30:23.323945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:25.975 qpair failed and we were unable to recover it.
00:35:25.975 [2024-10-06 11:30:23.324152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:25.975 [2024-10-06 11:30:23.324174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420
00:35:25.975 qpair failed and we were unable to recover it.
00:35:25.975 [2024-10-06 11:30:23.324286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:25.975 [2024-10-06 11:30:23.324303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420
00:35:25.975 qpair failed and we were unable to recover it.
00:35:25.975 [2024-10-06 11:30:23.324440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:25.975 [2024-10-06 11:30:23.324458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420
00:35:25.975 qpair failed and we were unable to recover it.
00:35:25.975 [2024-10-06 11:30:23.324596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:25.975 [2024-10-06 11:30:23.324613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420
00:35:25.975 qpair failed and we were unable to recover it.
00:35:25.975 [2024-10-06 11:30:23.324790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:25.975 [2024-10-06 11:30:23.324807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420
00:35:25.975 qpair failed and we were unable to recover it.
00:35:25.975 [2024-10-06 11:30:23.324935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:25.975 [2024-10-06 11:30:23.324952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420
00:35:25.975 qpair failed and we were unable to recover it.
00:35:25.977 [2024-10-06 11:30:23.339505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.977 [2024-10-06 11:30:23.339517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.977 qpair failed and we were unable to recover it. 00:35:25.977 [2024-10-06 11:30:23.339711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.977 [2024-10-06 11:30:23.339722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.977 qpair failed and we were unable to recover it. 00:35:25.977 [2024-10-06 11:30:23.340007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.977 [2024-10-06 11:30:23.340019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.977 qpair failed and we were unable to recover it. 00:35:25.977 [2024-10-06 11:30:23.340187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.977 [2024-10-06 11:30:23.340199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.977 qpair failed and we were unable to recover it. 00:35:25.977 [2024-10-06 11:30:23.340385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.977 [2024-10-06 11:30:23.340397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.977 qpair failed and we were unable to recover it. 00:35:25.977 [2024-10-06 11:30:23.340511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.977 [2024-10-06 11:30:23.340523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.977 qpair failed and we were unable to recover it. 00:35:25.977 [2024-10-06 11:30:23.340645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.977 [2024-10-06 11:30:23.340657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.977 qpair failed and we were unable to recover it. 00:35:25.977 [2024-10-06 11:30:23.340912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.977 [2024-10-06 11:30:23.340924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.977 qpair failed and we were unable to recover it. 00:35:25.977 [2024-10-06 11:30:23.341095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.341107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.341221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.341232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 
00:35:25.978 [2024-10-06 11:30:23.341374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.341386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.341546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.341557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.341673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.341686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.341885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.341896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.342089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.342101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.342356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.342368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.342503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.342515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.342716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.342729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.342924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.342936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.343034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.343045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 
00:35:25.978 [2024-10-06 11:30:23.343234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.343246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.343446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.343457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.343589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.343600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.343772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.343784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.343899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.343911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.344017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.344027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.344135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.344145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.344312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.344325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.344437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.344450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.344711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.344724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 
00:35:25.978 [2024-10-06 11:30:23.344830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.344842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.345019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.345032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.345211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.345223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.345399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.345411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.345578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.345592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.345796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.345808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.345916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.345928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.346093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.346106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.346201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.346211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.346334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.346345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 
00:35:25.978 [2024-10-06 11:30:23.346495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.346507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.346668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.346680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.346910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.346922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.347031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.347043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.347310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.347323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.347499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.347511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.347619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.347631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.347809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.347821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.347985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.347997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.978 [2024-10-06 11:30:23.348125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.348140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 
00:35:25.978 [2024-10-06 11:30:23.348316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.978 [2024-10-06 11:30:23.348328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.978 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.348565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.348577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.348690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.348701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.348935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.348948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.349186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.349198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.349432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.349444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.349619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.349631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.349752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.349762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.349874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.349884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.350072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.350083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 
00:35:25.979 [2024-10-06 11:30:23.350259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.350271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.350408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.350420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.350527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.350538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.350622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.350633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.350889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.350902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.351075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.351087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.351254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.351266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.351387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.351400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.351506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.351517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.351706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.351718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 
00:35:25.979 [2024-10-06 11:30:23.351944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.351956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.352119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.352131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.352360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.352372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.352559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.352572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.352644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.352656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.352838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.352850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.353020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.353031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.353267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.353280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.353361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.353371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.353555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.353567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 
00:35:25.979 [2024-10-06 11:30:23.353733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.353744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.353925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.353937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.354102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.354114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.354238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.354249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.354425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.354437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.354565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.354577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.354809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.354821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.355002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.355014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.355130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.355142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.355237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.355247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 
00:35:25.979 [2024-10-06 11:30:23.355434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.355447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.979 qpair failed and we were unable to recover it. 00:35:25.979 [2024-10-06 11:30:23.355569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.979 [2024-10-06 11:30:23.355582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.355762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.355774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.355955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.355968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.356135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.356147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.356306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.356319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.356509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.356521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.356711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.356723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.356807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.356817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.356999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.357012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 
00:35:25.980 [2024-10-06 11:30:23.357180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.357192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.357395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.357407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.357644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.357656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.357834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.357846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.358025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.358037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.358151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.358162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.358276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.358287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.358422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.358435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.358610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.358622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.358746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.358757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 
00:35:25.980 [2024-10-06 11:30:23.358880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.358892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.359077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.359090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.359200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.359211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.359424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.359436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.359552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.359566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.359743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.359754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.359874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.359886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.360063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.360076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.360312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.360323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.360584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.360597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 
00:35:25.980 [2024-10-06 11:30:23.360707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.360717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.360903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.360915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.361094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.361107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.361354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.361365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.361525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.361537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.361655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.361667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.361884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.361895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.362009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.362021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.362238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.362250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.362435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.362447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 
00:35:25.980 [2024-10-06 11:30:23.362542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.362552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.362819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.362831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.362995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.980 [2024-10-06 11:30:23.363006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.980 qpair failed and we were unable to recover it. 00:35:25.980 [2024-10-06 11:30:23.363208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.363220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.363462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.363474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.363662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.363674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.363883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.363895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.364019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.364031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.364263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.364275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.364381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.364393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 
00:35:25.981 [2024-10-06 11:30:23.364564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.364576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.364758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.364771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.364960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.364972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.365155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.365167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.365281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.365293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.365523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.365535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.365716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.365727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.365892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.365904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.365988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.365999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.366130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.366142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 
00:35:25.981 [2024-10-06 11:30:23.366311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.366324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.366523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.366535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.366650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.366662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.366804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.366816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.366993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.367006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.367184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.367196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.367291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.367302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.367487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.367501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.367611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.367622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.367800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.367811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 
00:35:25.981 [2024-10-06 11:30:23.367994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.368005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.368126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.368138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.368355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.368367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.368499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.368510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.368604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.368614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.368797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.368808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.369045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.369057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.369257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.369270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.369461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.369473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.369682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.369694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 
00:35:25.981 [2024-10-06 11:30:23.369827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.369839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.370018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.981 [2024-10-06 11:30:23.370030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.981 qpair failed and we were unable to recover it. 00:35:25.981 [2024-10-06 11:30:23.370319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.370332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.370560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.370572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.370664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.370675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.370811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.370822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.371022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.371035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.371232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.371244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.371372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.371384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.371555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.371567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 
00:35:25.982 [2024-10-06 11:30:23.371735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.371746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.371911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.371922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.372113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.372125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.372293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.372305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.372500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.372512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.372623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.372633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.372803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.372815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.372922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.372932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.373110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.373123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.373234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.373246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 
00:35:25.982 [2024-10-06 11:30:23.373427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.373438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.373599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.373610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.373724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.373734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.373977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.373989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.374158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.374172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.374271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.374282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.374397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.374408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.374592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.374604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.374836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.374848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.375019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.375031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 
00:35:25.982 [2024-10-06 11:30:23.375149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.375162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.375278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.375289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.375406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.375418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.375499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.375511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.375650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.375661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.375758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.375769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.375932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.375943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.376066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.982 [2024-10-06 11:30:23.376079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.982 qpair failed and we were unable to recover it. 00:35:25.982 [2024-10-06 11:30:23.376255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.376267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.376435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.376447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 
00:35:25.983 [2024-10-06 11:30:23.376623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.376635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.376766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.376778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.376880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.376890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.377064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.377077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.377277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.377289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.377462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.377474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.377610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.377622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.377809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.377821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.378057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.378081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.378337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.378349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 
00:35:25.983 [2024-10-06 11:30:23.378517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.378529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.378665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.378678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.378790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.378801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.378925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.378937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.379045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.379065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.379181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.379193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.379422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.379435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.379603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.379614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.379788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.379801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.379973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.379985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 
00:35:25.983 [2024-10-06 11:30:23.380097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.380108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.380342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.380354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.380541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.380553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.380723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.380735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.380911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.380932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.381056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.381071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.381179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.381191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.381307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.381318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.381554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.381566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.381774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.381786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 
00:35:25.983 [2024-10-06 11:30:23.381959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.381971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.382154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.382166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.382276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.382288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.382455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.382467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.382575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.382586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.382714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.382726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.382979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.382991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.383230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.383242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.383362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.383374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 00:35:25.983 [2024-10-06 11:30:23.383551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.383564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.983 qpair failed and we were unable to recover it. 
00:35:25.983 [2024-10-06 11:30:23.383730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.983 [2024-10-06 11:30:23.383741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.383920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.383932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.384194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.384206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.384321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.384333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.384508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.384519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.384624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.384636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.384731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.384743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.384847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.384859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.385035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.385047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.385285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.385297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 
00:35:25.984 [2024-10-06 11:30:23.385558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.385570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.385717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.385729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.385859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.385871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.386118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.386131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.386253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.386266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.386446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.386458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.386690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.386702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.386809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.386821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.387028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.387040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.387232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.387245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 
00:35:25.984 [2024-10-06 11:30:23.387360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.387370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.387545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.387557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.387683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.387696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.387803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.387815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.387981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.387995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.388171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.388184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.388378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.388389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.388541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.388554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.388667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.388679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.388768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.388778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 
00:35:25.984 [2024-10-06 11:30:23.388893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.388904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.389092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.389105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.389276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.389287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.389475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.389488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.389564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.389574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.389683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.389695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.389917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.389928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.390105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.390117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.390294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.390306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.390436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.390448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 
00:35:25.984 [2024-10-06 11:30:23.390551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.390564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.390660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.390670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.984 qpair failed and we were unable to recover it. 00:35:25.984 [2024-10-06 11:30:23.390861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.984 [2024-10-06 11:30:23.390874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.985 qpair failed and we were unable to recover it. 00:35:25.985 [2024-10-06 11:30:23.390980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.985 [2024-10-06 11:30:23.390991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.985 qpair failed and we were unable to recover it. 00:35:25.985 [2024-10-06 11:30:23.391119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.985 [2024-10-06 11:30:23.391130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.985 qpair failed and we were unable to recover it. 00:35:25.985 [2024-10-06 11:30:23.391251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.985 [2024-10-06 11:30:23.391264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.985 qpair failed and we were unable to recover it. 00:35:25.985 [2024-10-06 11:30:23.391434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.985 [2024-10-06 11:30:23.391446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.985 qpair failed and we were unable to recover it. 00:35:25.985 [2024-10-06 11:30:23.391557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.985 [2024-10-06 11:30:23.391570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.985 qpair failed and we were unable to recover it. 00:35:25.985 [2024-10-06 11:30:23.391804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.985 [2024-10-06 11:30:23.391816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.985 qpair failed and we were unable to recover it. 00:35:25.985 [2024-10-06 11:30:23.391925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.985 [2024-10-06 11:30:23.391936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.985 qpair failed and we were unable to recover it. 
00:35:25.985 [2024-10-06 11:30:23.392172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.985 [2024-10-06 11:30:23.392186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.985 qpair failed and we were unable to recover it.
00:35:25.985-00:35:25.990 [the same error triplet -- posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. -- repeats back-to-back with consecutive timestamps from 2024-10-06 11:30:23.392172 through 11:30:23.427963; only the microsecond timestamps differ between repetitions]
00:35:25.990 [2024-10-06 11:30:23.428201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.428213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.428381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.428393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.428573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.428584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.428767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.428780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.429041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.429053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.429233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.429245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.429426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.429438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.429604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.429616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.429871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.429883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.430002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.430014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 
00:35:25.990 [2024-10-06 11:30:23.430182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.430195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.430406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.430418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.430591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.430603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.430742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.430753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.430933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.430945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.431195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.431208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.431463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.431475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.431585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.431596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.431854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.431866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.432045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.432056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 
00:35:25.990 [2024-10-06 11:30:23.432309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.432321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.432509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-10-06 11:30:23.432520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.990 qpair failed and we were unable to recover it. 00:35:25.990 [2024-10-06 11:30:23.432793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.432805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.432907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.432919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.433154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.433166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.433366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.433377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.433544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.433556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.433653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.433673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.433841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.433853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.433988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.434000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 
00:35:25.991 [2024-10-06 11:30:23.434161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.434174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.434408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.434422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.434541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.434552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.434730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.434741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.434841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.434852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.434989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.435001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.435186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.435198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.435373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.435385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.435506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.435519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.435639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.435651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 
00:35:25.991 [2024-10-06 11:30:23.435819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.435832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.435929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.435941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.436043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.436054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.436290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.436303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.436407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.436417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.436530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.436541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.436653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.436664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.436832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.436844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.436972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.436984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.437153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.437164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 
00:35:25.991 [2024-10-06 11:30:23.437326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.437339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.437445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.437456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.437613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.437625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.437734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.437746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.437906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.437918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.438038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.438050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.438185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.438196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.438317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.438329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.438427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.438438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.438621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.438634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 
00:35:25.991 [2024-10-06 11:30:23.438762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.438774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.438957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.438968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.439138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.439154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.439333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.991 [2024-10-06 11:30:23.439345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.991 qpair failed and we were unable to recover it. 00:35:25.991 [2024-10-06 11:30:23.439512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.439523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.439636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.439648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.439750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.439762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.439945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.439956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.440189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.440201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.440395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.440406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 
00:35:25.992 [2024-10-06 11:30:23.440641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.440653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.440769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.440782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.440896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.440907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.441048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.441064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.441250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.441262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.441373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.441385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.441551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.441563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.441814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.441826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.441996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.442008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.442188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.442200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 
00:35:25.992 [2024-10-06 11:30:23.442290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.442300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.442483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.442494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.442677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.442689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.442874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.442885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.443014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.443026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.443195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.443207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.443400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.443412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.443541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.443553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.443660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.443672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.443940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.443952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 
00:35:25.992 [2024-10-06 11:30:23.444121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.444133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.444240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.444251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.444446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.444458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.444635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.444646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.444757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.444769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.444887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.444900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.445011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.445022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.445278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.445291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.445476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.445488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.445601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.445613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 
00:35:25.992 [2024-10-06 11:30:23.445739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.445751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.445937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.445950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.446143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.446156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.446329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.446341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.446451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.446462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.992 [2024-10-06 11:30:23.446648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.992 [2024-10-06 11:30:23.446659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.992 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.446760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.446771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.447056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.447071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.447322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.447334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.447520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.447531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 
00:35:25.993 [2024-10-06 11:30:23.447653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.447664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.447919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.447934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.448063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.448074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.448238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.448250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.448489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.448501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.448571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.448581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.448786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.448798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.448983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.448994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.449155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.449168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.449420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.449432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 
00:35:25.993 [2024-10-06 11:30:23.449668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.449680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.449842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.449854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.450014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.450025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.450266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.450277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.450373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.450385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.450500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.450512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.450644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.450656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.450727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.450737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.450815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.450825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.450960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.450972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 
00:35:25.993 [2024-10-06 11:30:23.451078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.451100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.451187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.451197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.451305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.451315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.451489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.451501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.451616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.451628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.451824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.451837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.452037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.452049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.452153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.452175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.452366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.452378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.452545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.452556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 
00:35:25.993 [2024-10-06 11:30:23.452723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.452735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.452860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.452872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.452988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.453000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.453190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.453202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.453459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.453471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.453726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.453738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.453836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.453847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.454024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.993 [2024-10-06 11:30:23.454037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.993 qpair failed and we were unable to recover it. 00:35:25.993 [2024-10-06 11:30:23.454163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.454174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.454290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.454301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 
00:35:25.994 [2024-10-06 11:30:23.454407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.454419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.454579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.454592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.454681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.454692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.454922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.454936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.455116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.455128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.455360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.455372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.455480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.455491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.455680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.455692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.455883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.455895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.456066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.456078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 
00:35:25.994 [2024-10-06 11:30:23.456245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.456256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.456385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.456397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.456512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.456524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.456664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.456676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.456803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.456815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.457052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.457067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.457257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.457269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.457526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.457538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.457789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.457801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.457976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.457988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 
00:35:25.994 [2024-10-06 11:30:23.458174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.458187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.458313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.458325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.458448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.458460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.458542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.458554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.458679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.458691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.458943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.458955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.459121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.459133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.459306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.459319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.459523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.459535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.459657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.459669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 
00:35:25.994 [2024-10-06 11:30:23.459957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.459969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.460104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.460118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.460235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.460246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.460370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.460382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.460496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.460508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.460695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.460706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.994 [2024-10-06 11:30:23.460966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.994 [2024-10-06 11:30:23.460978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.994 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.461101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.461113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.461303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.461314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.461408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.461420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 
00:35:25.995 [2024-10-06 11:30:23.461656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.461668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.461766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.461779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.461910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.461922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.462085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.462097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.462272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.462284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.462493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.462504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.462754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.462766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.462870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.462883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.463076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.463088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.463264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.463277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 
00:35:25.995 [2024-10-06 11:30:23.463411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.463423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.463603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.463615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.463733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.463745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.463910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.463921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.464088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.464101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.464219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.464232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.464397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.464408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.464523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.464536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.464646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.464659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.464832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.464845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 
00:35:25.995 [2024-10-06 11:30:23.465045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.465057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.465225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.465237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.995 [2024-10-06 11:30:23.465349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.995 [2024-10-06 11:30:23.465360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.995 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.465540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.465552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.465783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.465795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.465975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.465987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.466220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.466233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.466347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.466358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.466627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.466639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.466819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.466832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 
00:35:25.996 [2024-10-06 11:30:23.467074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.467093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.467282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.467294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.467428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.467441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.467624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.467636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.467890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.467901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.468013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.468025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.468137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.468149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.468330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.468341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.468519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.468532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.468616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.468627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 
00:35:25.996 [2024-10-06 11:30:23.468794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.468805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.469038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.469052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.469296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.469308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.469476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.469487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.469675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.469688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.469890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.469902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.470008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.470020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.470129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.470142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.470317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.470329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.470489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.470502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 
00:35:25.996 [2024-10-06 11:30:23.470680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.470692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.470815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.470827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.470992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.471005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.471110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.471120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.471283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.471295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.471484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.471496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.471659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.471670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.471847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.471860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.472127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.996 [2024-10-06 11:30:23.472139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.996 qpair failed and we were unable to recover it. 00:35:25.996 [2024-10-06 11:30:23.472322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.472333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 
00:35:25.997 [2024-10-06 11:30:23.472512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.472524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.472639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.472650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.472839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.472851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.472975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.472987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.473083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.473093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.473222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.473234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.473406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.473417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.473514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.473527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.473670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.473704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.473896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.473920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 
00:35:25.997 [2024-10-06 11:30:23.474050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.474079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.474197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.474210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.474404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.474415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.474600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.474612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.474724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.474737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.474969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.474981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.475188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.475200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.475436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.475448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.475565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.475577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.475808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.475819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 
00:35:25.997 [2024-10-06 11:30:23.476002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.476014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.476274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.476288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.476491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.476503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.476632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.476644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.476757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.476769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.476875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.476887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.477063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.477076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.477196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.477208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.477311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.477323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.477506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.477518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 
00:35:25.997 [2024-10-06 11:30:23.477805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.477817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.477997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.478008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.478243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.478256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.478458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.478470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.478582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.478594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.478708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.997 [2024-10-06 11:30:23.478720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.997 qpair failed and we were unable to recover it. 00:35:25.997 [2024-10-06 11:30:23.478898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.478910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.479093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.479105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.479220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.479232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.479405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.479418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 
00:35:25.998 [2024-10-06 11:30:23.479534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.479546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.479721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.479732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.479846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.479859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.480022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.480034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.480223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.480236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.480352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.480363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.480536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.480547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.480806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.480818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.480936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.480958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.481242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.481262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 
00:35:25.998 [2024-10-06 11:30:23.481387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.481404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.481625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.481643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.481906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.481923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.482125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.482143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.482265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.482278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.482432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.482444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.482608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.482620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.482802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.482815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.482989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.483000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.483104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.483115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 
00:35:25.998 [2024-10-06 11:30:23.483288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.483301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.483483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.483497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.483691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.483702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.483950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.483963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.484212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.484224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.484391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.484404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.484605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.484617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.484744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.484756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.484989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.485001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.485218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.485231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 
00:35:25.998 [2024-10-06 11:30:23.485361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.485374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.485545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.485557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.485656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.998 [2024-10-06 11:30:23.485669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.998 qpair failed and we were unable to recover it. 00:35:25.998 [2024-10-06 11:30:23.485904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.485916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.486097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.486110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.486283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.486295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.486404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.486416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.486593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.486606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.486783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.486795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.486981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.486993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 
00:35:25.999 [2024-10-06 11:30:23.487223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.487235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.487343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.487356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.487521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.487533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.487735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.487747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.487922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.487934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.488100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.488113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.488302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.488314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.488572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.488583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.488801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.488821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.489019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.489036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 
00:35:25.999 [2024-10-06 11:30:23.489295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.489313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.489414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.489431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.489704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.489722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.489967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.489984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.490213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.490227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.490350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.490361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.490483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.490495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.490736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.490748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.490942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.490953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.491067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.491079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 
00:35:25.999 [2024-10-06 11:30:23.491279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.491291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.491489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.491501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.491616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.491628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:25.999 [2024-10-06 11:30:23.491796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.999 [2024-10-06 11:30:23.491808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:25.999 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.491990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.492002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.492167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.492179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.492428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.492441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.492626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.492638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.492759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.492771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.492958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.492970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 
00:35:26.000 [2024-10-06 11:30:23.493200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.493212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.493431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.493443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.493619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.493631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.493833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.493844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.494114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.494126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.494312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.494324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.494442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.494453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.494618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.494631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.494814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.494826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.494939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.494951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 
00:35:26.000 [2024-10-06 11:30:23.495134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.495147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.495334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.495346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.495463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.495475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.495716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.495728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.495933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.495945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.496108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.496121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.496352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.496364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.496499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.496511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.496624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.496639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.496892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.496904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 
00:35:26.000 [2024-10-06 11:30:23.497089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.497102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.497359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.497371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.000 [2024-10-06 11:30:23.497564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.000 [2024-10-06 11:30:23.497577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.000 qpair failed and we were unable to recover it. 00:35:26.283 [2024-10-06 11:30:23.497689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.283 [2024-10-06 11:30:23.497701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.283 qpair failed and we were unable to recover it. 00:35:26.283 [2024-10-06 11:30:23.497883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.283 [2024-10-06 11:30:23.497895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.283 qpair failed and we were unable to recover it. 00:35:26.283 [2024-10-06 11:30:23.498008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.283 [2024-10-06 11:30:23.498021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.283 qpair failed and we were unable to recover it. 00:35:26.283 [2024-10-06 11:30:23.498208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.283 [2024-10-06 11:30:23.498220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.283 qpair failed and we were unable to recover it. 00:35:26.283 [2024-10-06 11:30:23.498403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.283 [2024-10-06 11:30:23.498416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.283 qpair failed and we were unable to recover it. 00:35:26.283 [2024-10-06 11:30:23.498612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.283 [2024-10-06 11:30:23.498623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.283 qpair failed and we were unable to recover it. 00:35:26.283 [2024-10-06 11:30:23.498807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.283 [2024-10-06 11:30:23.498819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.283 qpair failed and we were unable to recover it. 
00:35:26.283 [2024-10-06 11:30:23.499004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.283 [2024-10-06 11:30:23.499015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.283 qpair failed and we were unable to recover it. 00:35:26.283 [2024-10-06 11:30:23.499103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.283 [2024-10-06 11:30:23.499114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.283 qpair failed and we were unable to recover it. 00:35:26.283 [2024-10-06 11:30:23.499232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.283 [2024-10-06 11:30:23.499242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.283 qpair failed and we were unable to recover it. 00:35:26.283 [2024-10-06 11:30:23.499410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.283 [2024-10-06 11:30:23.499422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.283 qpair failed and we were unable to recover it. 00:35:26.283 [2024-10-06 11:30:23.499550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.283 [2024-10-06 11:30:23.499562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.283 qpair failed and we were unable to recover it. 00:35:26.283 [2024-10-06 11:30:23.499686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.283 [2024-10-06 11:30:23.499698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.283 qpair failed and we were unable to recover it. 00:35:26.283 [2024-10-06 11:30:23.499934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.283 [2024-10-06 11:30:23.499946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.283 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.500120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.500133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.500256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.500268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.500388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.500401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 
00:35:26.284 [2024-10-06 11:30:23.500597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.500608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.500717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.500729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.500863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.500874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.501103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.501115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.501225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.501236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.501400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.501413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.501605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.501617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.501817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.501830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.502001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.502013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.502180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.502192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 
00:35:26.284 [2024-10-06 11:30:23.502385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.502397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.502574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.502585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.502816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.502828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.502945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.502957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.503119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.503132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.503244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.503256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.503380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.503392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.503500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.503511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.503624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.503638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.503897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.503910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 
00:35:26.284 [2024-10-06 11:30:23.504142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.504154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.504334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.504346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.504514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.504525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.504695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.504708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.505027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.505039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.505142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.505153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.505333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.505345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.505469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.505481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.505658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.505669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.505916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.505929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 
00:35:26.284 [2024-10-06 11:30:23.506091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.506103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.506283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.506295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.506469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.284 [2024-10-06 11:30:23.506480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.284 qpair failed and we were unable to recover it. 00:35:26.284 [2024-10-06 11:30:23.506598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.506609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.506866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.506878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.507098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.507110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.507221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.507233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.507332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.507343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.507440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.507450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.507550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.507562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 
00:35:26.285 [2024-10-06 11:30:23.507716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.507728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.507915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.507927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.508101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.508113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.508253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.508263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.508383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.508395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.508562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.508575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.508808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.508820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.509077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.509090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.509168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.509178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.509376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.509388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 
00:35:26.285 [2024-10-06 11:30:23.509596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.509608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.509722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.509734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.509917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.509929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.510055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.510069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.510208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.510220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.510454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.510465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.510590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.510602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.510732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.510745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.510865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.510882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.511073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.511085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 
00:35:26.285 [2024-10-06 11:30:23.511208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.511221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.511324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.511336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.511537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.511549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.511646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.511657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.511912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.511925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.512129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.512141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.512322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.512334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.512500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.512513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.512684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.512697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 00:35:26.285 [2024-10-06 11:30:23.512801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.285 [2024-10-06 11:30:23.512812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.285 qpair failed and we were unable to recover it. 
00:35:26.285 [2024-10-06 11:30:23.512984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.286 [2024-10-06 11:30:23.512997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:26.286 qpair failed and we were unable to recover it.
00:35:26.286 [... the same three-line error group (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, roughly 200 further times, between 11:30:23.513181 and 11:30:23.549980; the verbatim repeats are elided here ...]
00:35:26.291 [2024-10-06 11:30:23.550143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.291 [2024-10-06 11:30:23.550156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.291 qpair failed and we were unable to recover it. 00:35:26.291 [2024-10-06 11:30:23.550278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.291 [2024-10-06 11:30:23.550290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.291 qpair failed and we were unable to recover it. 00:35:26.291 [2024-10-06 11:30:23.550497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.291 [2024-10-06 11:30:23.550509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.291 qpair failed and we were unable to recover it. 00:35:26.291 [2024-10-06 11:30:23.550708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.550720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.550904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.550915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.551088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.551101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.551225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.551237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.551424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.551436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.551612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.551624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.551721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.551733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 
00:35:26.292 [2024-10-06 11:30:23.551919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.551931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.552039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.552051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.552230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.552243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.552414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.552426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.552593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.552605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.552714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.552725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.552924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.552936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.553115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.553127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.553305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.553319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.553420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.553432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 
00:35:26.292 [2024-10-06 11:30:23.553561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.553573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.553773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.553785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.553912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.553924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.554020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.554031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.554158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.554171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.554293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.554305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.554419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.554431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.554596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.554608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.554780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.554793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.554878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.554888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 
00:35:26.292 [2024-10-06 11:30:23.555106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.555118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.555216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.555228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.555355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.555366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.555568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.555580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.555778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.555790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.555903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.555915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.556018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.556029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.556158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.556175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.556350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.556362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 00:35:26.292 [2024-10-06 11:30:23.556525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.292 [2024-10-06 11:30:23.556537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.292 qpair failed and we were unable to recover it. 
00:35:26.292 [2024-10-06 11:30:23.556701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.556712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.556831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.556843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.557030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.557042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.557214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.557226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.557408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.557419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.557528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.557541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.557705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.557717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.557836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.557849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.558031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.558043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.558211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.558224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 
00:35:26.293 [2024-10-06 11:30:23.558392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.558404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.558517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.558529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.558635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.558646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.558900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.558912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.559077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.559090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.559273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.559285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.559406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.559419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.559523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.559535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.559725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.559737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.559994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.560006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 
00:35:26.293 [2024-10-06 11:30:23.560117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.560130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.560313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.560325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.560438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.560450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.560633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.560645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.560819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.560831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.560997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.561008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.561188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.561201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.561327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.561339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.561572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.561584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.561742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.561755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 
00:35:26.293 [2024-10-06 11:30:23.561986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.561998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.562116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.562128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.293 [2024-10-06 11:30:23.562304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.293 [2024-10-06 11:30:23.562316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.293 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.562504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.562515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.562702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.562714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.562888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.562900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.563136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.563148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.563313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.563325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.563560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.563573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.563761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.563773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 
00:35:26.294 [2024-10-06 11:30:23.563954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.563966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.564092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.564104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.564275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.564287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.564462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.564474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.564647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.564659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.564859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.564873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.565079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.565091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.565281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.565293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.565469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.565480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.565660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.565672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 
00:35:26.294 [2024-10-06 11:30:23.565927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.565940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.566071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.566083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.566207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.566219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.566380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.566393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.566486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.566496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.566663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.566676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.566859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.566871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.567069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.567081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.567188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.567201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.567374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.567387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 
00:35:26.294 [2024-10-06 11:30:23.567547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.567560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.567817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.567830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.567941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.567953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.568143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.568156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.568413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.568426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.568546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.568558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.568795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.568808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.568933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.568945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.569062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.569074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.294 qpair failed and we were unable to recover it. 00:35:26.294 [2024-10-06 11:30:23.569307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.294 [2024-10-06 11:30:23.569320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 
00:35:26.295 [2024-10-06 11:30:23.569437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.569450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.569638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.569650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.569850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.569862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.570044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.570055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.570231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.570243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.570351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.570363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.570483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.570496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.570672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.570684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.570796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.570807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.570923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.570935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 
00:35:26.295 [2024-10-06 11:30:23.571035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.571047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.571190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.571212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.571347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.571364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.571554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.571572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.571761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.571775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.571907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.571922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.572039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.572050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.572228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.572241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.572436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.572448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.572545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.572556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 
00:35:26.295 [2024-10-06 11:30:23.572799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.572812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.573037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.573049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.573222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.573234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.573404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.573416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.573585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.573596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.573788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.573800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.573988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.574001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.574148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.574160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.574344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.574356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.574482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.574494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 
00:35:26.295 [2024-10-06 11:30:23.574684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.574695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.574955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.574968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.575130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.575142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.575324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.575336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.575590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.575601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.575729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.575741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.295 qpair failed and we were unable to recover it. 00:35:26.295 [2024-10-06 11:30:23.575941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.295 [2024-10-06 11:30:23.575953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.576075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.576088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.576319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.576331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.576551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.576563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 
00:35:26.296 [2024-10-06 11:30:23.576729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.576740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.576900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.576912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.577041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.577053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.577203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.577215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.577327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.577339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.577445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.577455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.577567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.577579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.577739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.577751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.577986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.577998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.578164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.578176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 
00:35:26.296 [2024-10-06 11:30:23.578345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.578357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.578483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.578495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.578611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.578623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.578731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.578743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.578882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.578895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.579075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.579089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.579257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.579269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.579466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.579478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.579645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.579657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.579761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.579773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 
00:35:26.296 [2024-10-06 11:30:23.579983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.579997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.580120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.580133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.580270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.580282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.580477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.580489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.580623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.580634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.580834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.580845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.581089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.581102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.581218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.581229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.581481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.581493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.581695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.581706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 
00:35:26.296 [2024-10-06 11:30:23.581814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.581826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.581963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.581975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.582083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.296 [2024-10-06 11:30:23.582094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.296 qpair failed and we were unable to recover it. 00:35:26.296 [2024-10-06 11:30:23.582268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.582280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.582442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.582454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.582690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.582703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.582887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.582899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.583016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.583029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.583145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.583157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.583336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.583348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 
00:35:26.297 [2024-10-06 11:30:23.583461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.583473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.583706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.583718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.583835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.583847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.584086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.584099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.584207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.584219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.584408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.584421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.584613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.584624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.584729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.584740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.584854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.584865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.585068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.585080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 
00:35:26.297 [2024-10-06 11:30:23.585188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.585199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.585451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.585463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.585645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.585657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.585767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.585778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.585949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.585961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.586075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.586089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.586219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.586231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.586485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.586496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.586624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.586636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.586746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.586757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 
00:35:26.297 [2024-10-06 11:30:23.586875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.586886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.587002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.587014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.587175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.587187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.587379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.587390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.587513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.587525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.297 [2024-10-06 11:30:23.587691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.297 [2024-10-06 11:30:23.587703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.297 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.587869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.587880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.588044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.588056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.588245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.588258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.588435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.588446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 
00:35:26.298 [2024-10-06 11:30:23.588622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.588635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.588809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.588821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.588921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.588931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.589107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.589120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.589217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.589229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.589466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.589478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.589596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.589607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.589771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.589783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.589925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.589937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.590056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.590080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 
00:35:26.298 [2024-10-06 11:30:23.590261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.590273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.590448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.590460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.590641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.590653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.590777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.590790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.591024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.591036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.591269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.591281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.591446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.591458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.591550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.591560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.591774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.591786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.592047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.592062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 
00:35:26.298 [2024-10-06 11:30:23.592163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.592173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.592362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.592374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.592540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.592552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.592712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.592724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.593030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.593042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.593230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.593245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.593496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.593507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.593718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.593729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.593908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.593919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.594108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.594120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 
00:35:26.298 [2024-10-06 11:30:23.594307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.594319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.298 [2024-10-06 11:30:23.594427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.298 [2024-10-06 11:30:23.594439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.298 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.594693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.594705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.594961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.594974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.595158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.595170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.595295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.595308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.595419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.595431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.595551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.595563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.595823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.595834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.596045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.596057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 
00:35:26.299 [2024-10-06 11:30:23.596162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.596173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.596373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.596386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.596552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.596564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.596815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.596827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.597004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.597016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.597117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.597130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.597243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.597254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.597487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.597499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.597733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.597745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.597919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.597930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 
00:35:26.299 [2024-10-06 11:30:23.598040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.598051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.598165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.598178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.598360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.598371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.598538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.598549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.598730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.598742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.598919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.598930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.599139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.599152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.599434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.599446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.599634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.599646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.599827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.599838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 
00:35:26.299 [2024-10-06 11:30:23.600039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.600051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.600183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.600195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.600431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.600443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.600608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.600620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.600733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.600745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.600933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.600946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.601109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.601121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.601220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.601231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.299 [2024-10-06 11:30:23.601404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.299 [2024-10-06 11:30:23.601417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.299 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.601591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.601603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 
00:35:26.300 [2024-10-06 11:30:23.601774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.601785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.601888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.601899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.601999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.602010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.602142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.602153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.602430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.602442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.602570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.602581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.602677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.602689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.602860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.602872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.603078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.603090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.603205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.603216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 
00:35:26.300 [2024-10-06 11:30:23.603423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.603435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.603609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.603621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.603743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.603754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.603881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.603894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.604148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.604160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.604271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.604282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.604424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.604436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.604559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.604572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.604682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.604694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.604858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.604871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 
00:35:26.300 [2024-10-06 11:30:23.605048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.605064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.605301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.605313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.605571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.605583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.605770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.605781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.606028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.606040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.606239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.606251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.606365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.606377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.606489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.606501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.606592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.606603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.606718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.606728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 
00:35:26.300 [2024-10-06 11:30:23.606911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.606923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.607053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.607071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.607180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.607193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.607304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.607316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.607545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.607557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.300 qpair failed and we were unable to recover it. 00:35:26.300 [2024-10-06 11:30:23.607672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.300 [2024-10-06 11:30:23.607686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.607926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.607938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.608047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.608062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.608301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.608313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.608433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.608446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 
00:35:26.301 [2024-10-06 11:30:23.608564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.608575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.608739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.608751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.608934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.608946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.609123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.609137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.609318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.609329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.609495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.609507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.609741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.609753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.609851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.609862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.610036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.610048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.610241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.610253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 
00:35:26.301 [2024-10-06 11:30:23.610359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.610369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.610572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.610585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.610771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.610783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.610910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.610920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.611155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.611167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.611288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.611300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.611482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.611494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.611620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.611632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.611705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.611717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.611843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.611855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 
00:35:26.301 [2024-10-06 11:30:23.612031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.612043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.612207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.612219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.612453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.612472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.612677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.612694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.612832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.612848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.613037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.613055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.613165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.613182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.613377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.613394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.613576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.613591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.613769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.613780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 
00:35:26.301 [2024-10-06 11:30:23.613873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.613884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.614011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.301 [2024-10-06 11:30:23.614022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.301 qpair failed and we were unable to recover it. 00:35:26.301 [2024-10-06 11:30:23.614279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.614291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.614459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.614472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.614553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.614564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.614727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.614742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.614873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.614885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.614977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.614989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.615195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.615208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.615378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.615390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 
00:35:26.302 [2024-10-06 11:30:23.615638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.615650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.615900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.615912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.616091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.616103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.616303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.616315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.616400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.616410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.616516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.616529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.616709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.616720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.616815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.616827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.616934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.616946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.617209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.617221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 
00:35:26.302 [2024-10-06 11:30:23.617384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.617396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.617580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.617592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.617769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.617780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.618006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.618018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.618136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.618148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.618287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.618298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.618526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.618538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.618710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.618722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.618903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.618915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.619082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.619094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 
00:35:26.302 [2024-10-06 11:30:23.619207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.619219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.619393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.619406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.619578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.619589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.619717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.619729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.302 [2024-10-06 11:30:23.619921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.302 [2024-10-06 11:30:23.619933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.302 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.620043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.620053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.620181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.620193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.620315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.620326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.620414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.620424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.620542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.620554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 
00:35:26.303 [2024-10-06 11:30:23.620724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.620736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.620926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.620938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.621198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.621210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.621486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.621498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.621730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.621742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.621904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.621918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.622087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.622099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.622210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.622222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.622381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.622392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.622504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.622516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 
00:35:26.303 [2024-10-06 11:30:23.622631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.622643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.622825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.622837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.623089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.623101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.623367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.623379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.623644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.623656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.623840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.623851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.624030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.624042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.624173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.624185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.624309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.624321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.624568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.624580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 
00:35:26.303 [2024-10-06 11:30:23.624707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.624719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.624895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.624907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.625103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.625115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.625230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.625242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.625367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.625379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.625612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.625624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.625802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.625814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.626004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.626016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.626201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.626214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.626338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.626350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 
00:35:26.303 [2024-10-06 11:30:23.626552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.626564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.303 [2024-10-06 11:30:23.626737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.303 [2024-10-06 11:30:23.626749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.303 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.626869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.626881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.626991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.627002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.627184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.627196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.627294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.627306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.627433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.627445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.627608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.627620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.627816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.627828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.628000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.628012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 
00:35:26.304 [2024-10-06 11:30:23.628124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.628136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.628307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.628318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.628441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.628453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.628653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.628665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.628846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.628858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.629033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.629046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.629158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.629170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.629277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.629289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.629481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.629493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.629605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.629618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 
00:35:26.304 [2024-10-06 11:30:23.629809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.629820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.629983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.629995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.630169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.630182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.630308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.630322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.630453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.630465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.630539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.630549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.630735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.630748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.630980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.630992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.631107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.631119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.631285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.631297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 
00:35:26.304 [2024-10-06 11:30:23.631478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.631490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.631605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.631616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.631808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.631821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.631927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.631939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.632127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.632139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.632321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.632333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.632512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.632523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.632648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.304 [2024-10-06 11:30:23.632659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.304 qpair failed and we were unable to recover it. 00:35:26.304 [2024-10-06 11:30:23.632844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.305 [2024-10-06 11:30:23.632856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.305 qpair failed and we were unable to recover it. 00:35:26.305 [2024-10-06 11:30:23.632942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.305 [2024-10-06 11:30:23.632952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.305 qpair failed and we were unable to recover it. 
00:35:26.305 [2024-10-06 11:30:23.633150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.305 [2024-10-06 11:30:23.633162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.305 qpair failed and we were unable to recover it. 00:35:26.305 [2024-10-06 11:30:23.633350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.305 [2024-10-06 11:30:23.633362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.305 qpair failed and we were unable to recover it. 00:35:26.305 [2024-10-06 11:30:23.633542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.305 [2024-10-06 11:30:23.633555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.305 qpair failed and we were unable to recover it. 00:35:26.305 [2024-10-06 11:30:23.633735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.305 [2024-10-06 11:30:23.633748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.305 qpair failed and we were unable to recover it. 00:35:26.305 [2024-10-06 11:30:23.633874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.305 [2024-10-06 11:30:23.633885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.305 qpair failed and we were unable to recover it. 00:35:26.305 [2024-10-06 11:30:23.634006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.305 [2024-10-06 11:30:23.634018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.305 qpair failed and we were unable to recover it. 00:35:26.305 [2024-10-06 11:30:23.634217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.305 [2024-10-06 11:30:23.634229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.305 qpair failed and we were unable to recover it. 00:35:26.305 [2024-10-06 11:30:23.634333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.305 [2024-10-06 11:30:23.634345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.305 qpair failed and we were unable to recover it. 00:35:26.305 [2024-10-06 11:30:23.634524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.305 [2024-10-06 11:30:23.634536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.305 qpair failed and we were unable to recover it. 00:35:26.305 [2024-10-06 11:30:23.634768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.305 [2024-10-06 11:30:23.634780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.305 qpair failed and we were unable to recover it. 
[2024-10-06 11:30:23.635056 through 11:30:23.668272: the same pair of errors repeats for every reconnect attempt (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420), and each time the qpair failed and we were unable to recover it.]
00:35:26.310 [2024-10-06 11:30:23.668503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.310 [2024-10-06 11:30:23.668515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.310 qpair failed and we were unable to recover it. 00:35:26.310 [2024-10-06 11:30:23.668679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.310 [2024-10-06 11:30:23.668691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.310 qpair failed and we were unable to recover it. 00:35:26.310 [2024-10-06 11:30:23.668793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.310 [2024-10-06 11:30:23.668807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.310 qpair failed and we were unable to recover it. 00:35:26.310 [2024-10-06 11:30:23.668941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.310 [2024-10-06 11:30:23.668954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.310 qpair failed and we were unable to recover it. 00:35:26.310 [2024-10-06 11:30:23.669143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.310 [2024-10-06 11:30:23.669155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.310 qpair failed and we were unable to recover it. 00:35:26.310 [2024-10-06 11:30:23.669428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.310 [2024-10-06 11:30:23.669440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.310 qpair failed and we were unable to recover it. 00:35:26.310 [2024-10-06 11:30:23.669577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.310 [2024-10-06 11:30:23.669587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.310 qpair failed and we were unable to recover it. 00:35:26.310 [2024-10-06 11:30:23.669767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.310 [2024-10-06 11:30:23.669779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.310 qpair failed and we were unable to recover it. 00:35:26.310 [2024-10-06 11:30:23.669885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.310 [2024-10-06 11:30:23.669897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.310 qpair failed and we were unable to recover it. 00:35:26.310 [2024-10-06 11:30:23.670061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.310 [2024-10-06 11:30:23.670074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.310 qpair failed and we were unable to recover it. 
00:35:26.310 [2024-10-06 11:30:23.670257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.310 [2024-10-06 11:30:23.670269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.310 qpair failed and we were unable to recover it. 00:35:26.310 [2024-10-06 11:30:23.670360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.310 [2024-10-06 11:30:23.670371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.310 qpair failed and we were unable to recover it. 00:35:26.310 [2024-10-06 11:30:23.670482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.310 [2024-10-06 11:30:23.670494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.310 qpair failed and we were unable to recover it. 00:35:26.310 [2024-10-06 11:30:23.670727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.310 [2024-10-06 11:30:23.670739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.670924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.670936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.671118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.671130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.671364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.671377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.671537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.671548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.671720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.671732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.672007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.672019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 
00:35:26.311 [2024-10-06 11:30:23.672139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.672151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.672254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.672266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.672446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.672458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.672638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.672650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.672769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.672782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.672910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.672921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.672995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.673005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.673196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.673207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.673445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.673457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.673573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.673585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 
00:35:26.311 [2024-10-06 11:30:23.673694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.673706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.673812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.673825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.673929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.673941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.674205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.674218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.674344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.674355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.674541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.674554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.674630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.674641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.674751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.674762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.674922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.674933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.675051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.675066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 
00:35:26.311 [2024-10-06 11:30:23.675177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.675189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.675306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.675318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.675439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.675453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.675684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.675696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.675928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.675940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.676125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.311 [2024-10-06 11:30:23.676138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.311 qpair failed and we were unable to recover it. 00:35:26.311 [2024-10-06 11:30:23.676301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.676313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.676440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.676453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.676638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.676650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.676766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.676777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 
00:35:26.312 [2024-10-06 11:30:23.676939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.676952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.677185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.677197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.677316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.677328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.677529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.677541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.677651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.677663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.677846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.677858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.678037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.678049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.678221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.678233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.678356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.678369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.678480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.678492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 
00:35:26.312 [2024-10-06 11:30:23.678660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.678672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.678809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.678821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.678991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.679004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.679106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.679118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.679316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.679328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.679583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.679595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.679711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.679723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.679991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.680003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.680118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.680130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.680229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.680241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 
00:35:26.312 [2024-10-06 11:30:23.680350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.680362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.680565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.680578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.680752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.680764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.681022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.681034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.681292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.681304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.681483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.681495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.681611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.681623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.681822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.681834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.682002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.682014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.682128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.682139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 
00:35:26.312 [2024-10-06 11:30:23.682382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.682395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.682635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.682647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.682823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.312 [2024-10-06 11:30:23.682837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.312 qpair failed and we were unable to recover it. 00:35:26.312 [2024-10-06 11:30:23.683034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.683046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.683189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.683222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.683401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.683433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.683683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.683717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.683871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.683904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.684128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.684163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.684329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.684361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 
00:35:26.313 [2024-10-06 11:30:23.684512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.684544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.684725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.684757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.684921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.684954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.685199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.685232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.685446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.685479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.685724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.685757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.686043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.686085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.686256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.686289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.686417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.686451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.686739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.686772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 
00:35:26.313 [2024-10-06 11:30:23.687076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.687109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.687265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.687298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.687448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.687459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.687663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.687696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.687923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.687955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.688183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.688217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.688448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.688481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.688637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.688670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.688999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.689034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.689276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.689288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 
00:35:26.313 [2024-10-06 11:30:23.689475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.689506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.689756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.689789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.689929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.689960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.690265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.690299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.690514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.690526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.690776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.690787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.691001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.691033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.691286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.691318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.691570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.691605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 00:35:26.313 [2024-10-06 11:30:23.691812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.313 [2024-10-06 11:30:23.691845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.313 qpair failed and we were unable to recover it. 
00:35:26.314 [2024-10-06 11:30:23.692127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.692162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.692302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.692334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.692559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.692597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.692815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.692848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.693082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.693114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.693340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.693351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.693541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.693574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.693821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.693854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.694075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.694109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.694325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.694358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 
00:35:26.314 [2024-10-06 11:30:23.694585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.694618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.694859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.694892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.695116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.695150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.695338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.695371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.695543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.695576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.695742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.695775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.695990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.696002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.696268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.696281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.696450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.696483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.696759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.696793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 
00:35:26.314 [2024-10-06 11:30:23.697028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.697071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.697300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.697333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.697614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.697626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.697828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.697839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.698042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.698084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.698318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.698352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.698514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.698547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.698769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.698802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.699035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.699076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.699385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.699419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 
00:35:26.314 [2024-10-06 11:30:23.699643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.699676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.699887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.699921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.700104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.700139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.700440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.700451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.700613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.700625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.700747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.700773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.314 [2024-10-06 11:30:23.701011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.314 [2024-10-06 11:30:23.701045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.314 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.701250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.701262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.701460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.701494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.701726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.701759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 
00:35:26.315 [2024-10-06 11:30:23.701985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.702018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.702311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.702346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.702493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.702507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.702680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.702713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.702829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.702863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.703076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.703110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.703363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.703375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.703548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.703560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.703687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.703699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.703877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.703911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 
00:35:26.315 [2024-10-06 11:30:23.704133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.704169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.704453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.704485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.704649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.704683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.704984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.705016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.705181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.705215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.705426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.705459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.705743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.705776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.706066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.706097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.706287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.706299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.706396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.706407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 
00:35:26.315 [2024-10-06 11:30:23.706686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.706720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.706936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.706969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.707276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.707310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.707457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.707491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.707617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.707650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.707898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.707931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.708208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.708244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.708399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.708431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.708610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.708644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.708855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.708929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 
00:35:26.315 [2024-10-06 11:30:23.709167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.709188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.709481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.709514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.709696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.709730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.315 qpair failed and we were unable to recover it. 00:35:26.315 [2024-10-06 11:30:23.709896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.315 [2024-10-06 11:30:23.709928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.710160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.710177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.710373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.710386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.710599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.710632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.710846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.710880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.711024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.711056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.711276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.711288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 
00:35:26.316 [2024-10-06 11:30:23.711452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.711463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.711580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.711612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.711834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.711872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.712039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.712098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.712275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.712288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.712416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.712449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.712627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.712659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.712885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.712918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.713137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.713171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.713389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.713421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 
00:35:26.316 [2024-10-06 11:30:23.713623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.713635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.713736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.713747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.713921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.713933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.714121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.714134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.714309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.714322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.714503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.714535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.714684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.714718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.715019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.715052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.715209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.715222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.715421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.715452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 
00:35:26.316 [2024-10-06 11:30:23.715625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.715657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.715959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.715991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.716138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.716151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.716256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.716268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.716380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.716392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.316 [2024-10-06 11:30:23.716512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.316 [2024-10-06 11:30:23.716542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.316 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.716769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.716801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.717013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.717046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.717237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.717270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.717577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.717589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 
00:35:26.317 [2024-10-06 11:30:23.717756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.717768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.717883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.717917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.718204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.718239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.718456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.718490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.718734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.718766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.718938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.718972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.719184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.719220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.719523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.719556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.719718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.719751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.719888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.719921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 
00:35:26.317 [2024-10-06 11:30:23.720192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.720226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.720460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.720492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.720659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.720697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.720858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.720890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.721110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.721123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.721293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.721327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.721479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.721513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.721797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.721830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.721994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.722027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.722275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.722314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 
00:35:26.317 [2024-10-06 11:30:23.722447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.722486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.722705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.722740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.722986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.723019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.723249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.723285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.723519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.723536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.723730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.723748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.723928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.723945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.724141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.724159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.724357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.724393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.724566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.724600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 
00:35:26.317 [2024-10-06 11:30:23.724848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.724880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.725088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.725123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.317 qpair failed and we were unable to recover it. 00:35:26.317 [2024-10-06 11:30:23.725348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.317 [2024-10-06 11:30:23.725380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.725636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.725647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.725830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.725842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.726036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.726077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.726365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.726399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.726607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.726619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.726782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.726795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.726926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.726937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 
00:35:26.318 [2024-10-06 11:30:23.727046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.727063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.727180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.727214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.727441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.727473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.727699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.727733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.727909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.727942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.728247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.728280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.728605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.728639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.728784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.728817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.729123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.729156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.729438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.729471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 
00:35:26.318 [2024-10-06 11:30:23.729769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.729803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.730099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.730133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.730298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.730336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.730559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.730593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.730869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.730902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.731202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.731237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.731449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.731482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.731765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.731798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.732017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.732050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.732296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.732329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 
00:35:26.318 [2024-10-06 11:30:23.732627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.732659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.732884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.732917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.733129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.733163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.733336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.733369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.733543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.733576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.733784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.733817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.734124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.734158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.734369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.734405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.318 qpair failed and we were unable to recover it. 00:35:26.318 [2024-10-06 11:30:23.734638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.318 [2024-10-06 11:30:23.734650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.734815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.734848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 
00:35:26.319 [2024-10-06 11:30:23.735128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.735163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.735375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.735387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.735518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.735551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.735828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.735863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.736088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.736121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.736410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.736443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.736655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.736667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.736840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.736851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.737104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.737116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.737192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.737203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 
00:35:26.319 [2024-10-06 11:30:23.737364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.737374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.737559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.737591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.737895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.737929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.738128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.738162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.738385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.738419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.738668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.738679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.738813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.738826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.739012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.739044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.739210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.739222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.739345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.739357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 
00:35:26.319 [2024-10-06 11:30:23.739566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.739598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.739873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.739906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.740165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.740181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.740446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.740478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.740707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.740740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.741033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.741074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.741359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.741393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.741605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.741637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.741808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.741842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.742119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.742154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 
00:35:26.319 [2024-10-06 11:30:23.742436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.742470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.742628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.742661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.742965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.742999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.743222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.743256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.743378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.319 [2024-10-06 11:30:23.743389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.319 qpair failed and we were unable to recover it. 00:35:26.319 [2024-10-06 11:30:23.743556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.743568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.743755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.743789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.744002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.744035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.744234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.744268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.744510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.744544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 
00:35:26.320 [2024-10-06 11:30:23.744721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.744752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.744976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.745009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.745232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.745266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.745545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.745573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.745811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.745844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.746074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.746108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.746411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.746444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.746610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.746642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.746855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.746888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.747244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.747319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 
00:35:26.320 [2024-10-06 11:30:23.747534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.747607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.747844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.747880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.748040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.748103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.748232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.748250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.748452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.748484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.748654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.748686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.748938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.748971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.749184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.749203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.749349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.749366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.749659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.749691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 
00:35:26.320 [2024-10-06 11:30:23.749930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.749962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.750248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.750282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.750579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.750611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.750901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.750935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.751229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.751263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.751426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.751443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.751626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.751658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.751875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.751908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.752037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.752079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.752309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.752326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 
00:35:26.320 [2024-10-06 11:30:23.752472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.752490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.752706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.752738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.320 qpair failed and we were unable to recover it. 00:35:26.320 [2024-10-06 11:30:23.752899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.320 [2024-10-06 11:30:23.752932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.753160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.753178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.753371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.753403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.753619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.753652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.753825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.753863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.754094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.754130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.754354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.754371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.754582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.754614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 
00:35:26.321 [2024-10-06 11:30:23.754791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.754823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.755030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.755071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.755299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.755317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.755538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.755570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.755789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.755820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.755987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.756020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.756212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.756230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.756430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.756462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.756627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.756659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.756903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.756936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 
00:35:26.321 [2024-10-06 11:30:23.757171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.757206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.757352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.757384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.757539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.757557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.757663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.757681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.757976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.757991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.758178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.758190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.758360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.758372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.758545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.758578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.758722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.758756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.758983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.759016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 
00:35:26.321 [2024-10-06 11:30:23.759236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.759248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.759458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.759492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.759631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.759664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.759900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.759937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.321 [2024-10-06 11:30:23.760092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.321 [2024-10-06 11:30:23.760127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.321 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.760411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.760444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.760730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.760743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.760921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.760933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.761103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.761115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.761209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.761220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 
00:35:26.322 [2024-10-06 11:30:23.761329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.761340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.761508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.761542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.761698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.761731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.761942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.761974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.762255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.762289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.762443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.762477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.762687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.762721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.762957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.762991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.763273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.763306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.763520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.763531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 
00:35:26.322 [2024-10-06 11:30:23.763716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.763748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.764004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.764037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.764295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.764329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.764487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.764521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.764693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.764705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.764900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.764934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.765183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.765218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.765515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.765549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.765757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.765791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.766017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.766051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 
00:35:26.322 [2024-10-06 11:30:23.766245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.766257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.766359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.766391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.766598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.766630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.766842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.766875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.767107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.767141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.767419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.767464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.767605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.767617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.767738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.767749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.767874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.767888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.768147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.768181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 
00:35:26.322 [2024-10-06 11:30:23.768399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.322 [2024-10-06 11:30:23.768432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.322 qpair failed and we were unable to recover it. 00:35:26.322 [2024-10-06 11:30:23.768653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.768665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.768851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.768863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.769102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.769142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.769307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.769341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.769499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.769532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.769749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.769761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.769893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.769905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.770093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.770127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.770349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.770382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 
00:35:26.323 [2024-10-06 11:30:23.770595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.770627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.770902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.770936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.771183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.771217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.771446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.771457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.771691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.771703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.771875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.771886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.772057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.772073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.772337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.772370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.772506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.772539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.772757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.772791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 
00:35:26.323 [2024-10-06 11:30:23.773002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.773036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.773227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.773261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.773547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.773558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.773736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.773748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.773960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.773993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.774207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.774240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.774532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.774566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.774824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.774857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.775015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.775048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.775282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.775315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 
00:35:26.323 [2024-10-06 11:30:23.775522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.775590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.775804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.775824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.776025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.776043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.776269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.776283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.776389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.776400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.776614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.776647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.776797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.776830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.323 [2024-10-06 11:30:23.777041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.323 [2024-10-06 11:30:23.777081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.323 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.777345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.777357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.777586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.777597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 
00:35:26.324 [2024-10-06 11:30:23.777719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.777730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.777857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.777868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.778125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.778137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.778325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.778339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.778476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.778509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.778730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.778764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.779076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.779110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.779341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.779374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.779610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.779644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.779909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.779942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 
00:35:26.324 [2024-10-06 11:30:23.780173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.780208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.780496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.780529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.780755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.780788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.781043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.781085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.781212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.781245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.781470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.781503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.781756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.781768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.781947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.781959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.782152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.782186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.782398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.782431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 
00:35:26.324 [2024-10-06 11:30:23.782646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.782679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.782863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.782896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.783077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.783111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.783272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.783283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.783521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.783554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.783739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.783773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.784053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.784110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.784265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.784299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.784539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.784572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.784766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.784778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 
00:35:26.324 [2024-10-06 11:30:23.785013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.785101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.785348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.785385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.785546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.785564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.785766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.785799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.324 qpair failed and we were unable to recover it. 00:35:26.324 [2024-10-06 11:30:23.786023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.324 [2024-10-06 11:30:23.786057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.786307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.786341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.786638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.786673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.786900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.786935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.787094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.787128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.787300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.787334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 
00:35:26.325 [2024-10-06 11:30:23.787559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.787592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.787843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.787875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.788152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.788187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.788411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.788450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.788766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.788799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.789026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.789069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.789263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.789296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.789595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.789629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.789877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.789911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.790143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.790178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 
00:35:26.325 [2024-10-06 11:30:23.790479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.790512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.790790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.790823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.790979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.791011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.791313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.791347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.791597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.791638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.791933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.791966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.792141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.792174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.792441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.792475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.792640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.792672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.792921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.792954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 
00:35:26.325 [2024-10-06 11:30:23.793112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.793148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.793373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.793405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.793620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.793653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.793956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.793989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.794197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.794231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.794457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.794490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.794697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.794730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.794957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.794991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.795292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.795326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.795503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.795537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 
00:35:26.325 [2024-10-06 11:30:23.795765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.325 [2024-10-06 11:30:23.795778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.325 qpair failed and we were unable to recover it. 00:35:26.325 [2024-10-06 11:30:23.795955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.795990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.796139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.796173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.796480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.796514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.796825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.796858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.797115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.797148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.797300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.797312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.797601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.797633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.797869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.797903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.798142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.798178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 
00:35:26.326 [2024-10-06 11:30:23.798410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.798422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.798675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.798707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.798931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.798965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.799251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.799295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.799576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.799609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.799810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.799844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.800003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.800037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.800331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.800365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.800664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.800697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.800864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.800897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 
00:35:26.326 [2024-10-06 11:30:23.801122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.801155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.801316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.801348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.801511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.801544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.801858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.801891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.802203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.802238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.802459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.802491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.802714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.802748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.802939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.802972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.803269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.803304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.803456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.803468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 
00:35:26.326 [2024-10-06 11:30:23.803586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.803620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.803847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.803880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.804073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.326 [2024-10-06 11:30:23.804106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.326 qpair failed and we were unable to recover it. 00:35:26.326 [2024-10-06 11:30:23.804333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.804366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.804604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.804616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.804780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.804792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.804953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.804965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.805149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.805161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.805340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.805373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.805602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.805635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 
00:35:26.327 [2024-10-06 11:30:23.805977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.806052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.806314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.806352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.806597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.806630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.806909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.806942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.807175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.807211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.807444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.807477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.807701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.807714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.807904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.807937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.808185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.808220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.808373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.808407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 
00:35:26.327 [2024-10-06 11:30:23.808669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.808701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.808923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.808956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.809180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.809214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.809367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.809400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.809524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.809537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.809736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.809769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.810007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.810040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.810278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.810312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.810521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.810533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.810745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.810779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 
00:35:26.327 [2024-10-06 11:30:23.810928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.810960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.811211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.811245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.811468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.811500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.811669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.811703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.811966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.811978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.812176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.812189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.812368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.812401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.812536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.812569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.812874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.812907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 00:35:26.327 [2024-10-06 11:30:23.813152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.327 [2024-10-06 11:30:23.813186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.327 qpair failed and we were unable to recover it. 
00:35:26.328 [2024-10-06 11:30:23.813414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.813448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.813743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.813775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.813992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.814025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.814335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.814369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.814581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.814615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.814858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.814891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.815117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.815151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.815365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.815398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.815625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.815637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.815875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.815908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 
00:35:26.328 [2024-10-06 11:30:23.816154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.816195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.816346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.816378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.816568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.816580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.816838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.816872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.817046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.817087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.817415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.817449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.817679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.817691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.817798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.817810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.817940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.817974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.818265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.818298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 
00:35:26.328 [2024-10-06 11:30:23.818488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.818500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.818621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.818654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.818890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.818923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.819135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.819170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.819330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.819342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.819582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.819615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.819898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.819932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.820160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.820194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.820439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.820473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.820721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.820754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 
00:35:26.328 [2024-10-06 11:30:23.820912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.820944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.821229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.821263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.821396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.821407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.821594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.821629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.821866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.821898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.822125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.822159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.822313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.822346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.822532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.822544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.822690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.822724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 00:35:26.328 [2024-10-06 11:30:23.822883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.328 [2024-10-06 11:30:23.822916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.328 qpair failed and we were unable to recover it. 
00:35:26.329 [2024-10-06 11:30:23.823085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.823120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.823401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.823434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.823573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.823585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.823709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.823721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.823833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.823845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.824092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.824126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.824342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.824375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.824585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.824618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.824898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.824931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.825159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.825193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 
00:35:26.329 [2024-10-06 11:30:23.825492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.825531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.825652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.825693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.825998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.826031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.826268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.826300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.826452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.826463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.826641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.826674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.826964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.826997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.827195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.827229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.827382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.827394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.827507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.827517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 
00:35:26.329 [2024-10-06 11:30:23.827660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.827670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.827861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.827873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.828037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.828082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.828316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.828350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.828655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.828667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.828912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.828945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.829176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.829211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.829440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.829474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.829644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.829656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.829835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.829848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 
00:35:26.329 [2024-10-06 11:30:23.830073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.830086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.830265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.830277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.830480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.830491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.830607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.830619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.830734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.830767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.830988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.831022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.831244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.831278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.831555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.831567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.831744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.831756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 00:35:26.329 [2024-10-06 11:30:23.832066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.329 [2024-10-06 11:30:23.832078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.329 qpair failed and we were unable to recover it. 
00:35:26.330 [2024-10-06 11:30:23.832209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.330 [2024-10-06 11:30:23.832242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.330 qpair failed and we were unable to recover it. 00:35:26.330 [2024-10-06 11:30:23.832547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.330 [2024-10-06 11:30:23.832580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.330 qpair failed and we were unable to recover it. 00:35:26.330 [2024-10-06 11:30:23.832801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.330 [2024-10-06 11:30:23.832834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.330 qpair failed and we were unable to recover it. 00:35:26.330 [2024-10-06 11:30:23.833048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.330 [2024-10-06 11:30:23.833092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.330 qpair failed and we were unable to recover it. 00:35:26.330 [2024-10-06 11:30:23.833345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.330 [2024-10-06 11:30:23.833377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.330 qpair failed and we were unable to recover it. 00:35:26.330 [2024-10-06 11:30:23.833542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.330 [2024-10-06 11:30:23.833555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.330 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.833676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.833689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.833809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.833822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.833923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.833934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.834162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.834198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 
00:35:26.617 [2024-10-06 11:30:23.834425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.834464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.834677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.834689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.834808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.834819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.835010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.835022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.835233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.835246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.835374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.835386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.835568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.835580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.835729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.835740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.835933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.835966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.836203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.836236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 
00:35:26.617 [2024-10-06 11:30:23.836441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.836475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.836699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.836733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.836958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.836992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.837207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.837242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.837543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.837584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.837741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.837753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.837870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.837901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.838080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.838111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.838326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.838357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.838506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.838516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 
00:35:26.617 [2024-10-06 11:30:23.838697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.838708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.838833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.838844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.839028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.839040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.839259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.839271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.839452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.839463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.839567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.839578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.839749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.839761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.839958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.839970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.840081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.840092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.840247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.840259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 
00:35:26.617 [2024-10-06 11:30:23.840372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.840384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.840578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.840589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.840757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.840767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.617 qpair failed and we were unable to recover it. 00:35:26.617 [2024-10-06 11:30:23.840942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.617 [2024-10-06 11:30:23.840954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.841132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.841145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.841265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.841277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.841441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.841453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.841566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.841578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.841737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.841749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.841932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.841945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 
00:35:26.618 [2024-10-06 11:30:23.842120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.842134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.842310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.842322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.842519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.842531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.842638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.842649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.842735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.842746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.842864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.842875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.843043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.843054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.843242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.843255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.843422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.843433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.843626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.843638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 
00:35:26.618 [2024-10-06 11:30:23.843826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.843837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.843951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.843962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.844137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.844149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.844304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.844316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.844498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.844511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.844679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.844691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.844797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.844808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.845006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.845018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.845199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.845212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.845332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.845344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 
00:35:26.618 [2024-10-06 11:30:23.845511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.845524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.845641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.845653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.845806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.845817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.845983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.845995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.846178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.846190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.846318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.846330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.846461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.846473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.846643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.846655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.846846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.846858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.847031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.847043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 
00:35:26.618 [2024-10-06 11:30:23.847211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.847222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.847478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.847490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.847611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.847622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.847751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.847763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.847939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.847951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.848142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.618 [2024-10-06 11:30:23.848154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.618 qpair failed and we were unable to recover it. 00:35:26.618 [2024-10-06 11:30:23.848414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.848427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.848600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.848613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.848722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.848734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.848907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.848919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 
00:35:26.619 [2024-10-06 11:30:23.849034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.849048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.849171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.849182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.849373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.849384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.849561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.849573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.849752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.849764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.849861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.849872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.850111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.850123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.850288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.850300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.850468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.850479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.850591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.850602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 
00:35:26.619 [2024-10-06 11:30:23.850779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.850791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.850902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.850914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.851015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.851027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.851190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.851202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.851448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.851460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.851568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.851579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.851691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.851703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.851879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.851891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.852150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.852162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.852377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.852388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 
00:35:26.619 [2024-10-06 11:30:23.852520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.852532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.852641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.852653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.852914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.852926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.853073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.853086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.853340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.853352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.853520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.853531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.853711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.853723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.853900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.853912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.854156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.854168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.854291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.854303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 
00:35:26.619 [2024-10-06 11:30:23.854488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.854500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.854685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.854697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.854931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.854943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.855052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.855066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.855178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.855191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.855356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.855367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.855599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.855611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.855777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.619 [2024-10-06 11:30:23.855789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.619 qpair failed and we were unable to recover it. 00:35:26.619 [2024-10-06 11:30:23.856048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.620 [2024-10-06 11:30:23.856064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.620 qpair failed and we were unable to recover it. 00:35:26.620 [2024-10-06 11:30:23.856234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.620 [2024-10-06 11:30:23.856245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.620 qpair failed and we were unable to recover it. 
00:35:26.621 [2024-10-06 11:30:23.867008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.621 [2024-10-06 11:30:23.867021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:26.621 qpair failed and we were unable to recover it.
00:35:26.621 [2024-10-06 11:30:23.867208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.621 [2024-10-06 11:30:23.867220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:26.621 qpair failed and we were unable to recover it.
00:35:26.621 [2024-10-06 11:30:23.867350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.621 [2024-10-06 11:30:23.867361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:26.621 qpair failed and we were unable to recover it.
00:35:26.621 [2024-10-06 11:30:23.867549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.621 [2024-10-06 11:30:23.867561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:26.621 qpair failed and we were unable to recover it.
00:35:26.621 [2024-10-06 11:30:23.867669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.621 [2024-10-06 11:30:23.867681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:26.621 qpair failed and we were unable to recover it.
00:35:26.621 [2024-10-06 11:30:23.867874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.621 [2024-10-06 11:30:23.867885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:26.621 qpair failed and we were unable to recover it.
00:35:26.621 [2024-10-06 11:30:23.867996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.621 [2024-10-06 11:30:23.868008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:26.621 qpair failed and we were unable to recover it.
00:35:26.621 [2024-10-06 11:30:23.868256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.621 [2024-10-06 11:30:23.868294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420
00:35:26.621 qpair failed and we were unable to recover it.
00:35:26.621 [2024-10-06 11:30:23.868457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.621 [2024-10-06 11:30:23.868495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420
00:35:26.621 qpair failed and we were unable to recover it.
00:35:26.621 [2024-10-06 11:30:23.868648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.621 [2024-10-06 11:30:23.868667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420
00:35:26.621 qpair failed and we were unable to recover it.
00:35:26.621 [2024-10-06 11:30:23.868816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.621 [2024-10-06 11:30:23.868833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420
00:35:26.621 qpair failed and we were unable to recover it.
00:35:26.621 [2024-10-06 11:30:23.868951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.621 [2024-10-06 11:30:23.868967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420
00:35:26.621 qpair failed and we were unable to recover it.
00:35:26.621 [2024-10-06 11:30:23.869094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.621 [2024-10-06 11:30:23.869113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420
00:35:26.621 qpair failed and we were unable to recover it.
00:35:26.621 [2024-10-06 11:30:23.869359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.621 [2024-10-06 11:30:23.869376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420
00:35:26.621 qpair failed and we were unable to recover it.
00:35:26.621 [2024-10-06 11:30:23.869622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.621 [2024-10-06 11:30:23.869640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420
00:35:26.621 qpair failed and we were unable to recover it.
00:35:26.621 [2024-10-06 11:30:23.869850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.621 [2024-10-06 11:30:23.869867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420
00:35:26.621 qpair failed and we were unable to recover it.
00:35:26.621 [2024-10-06 11:30:23.870044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.621 [2024-10-06 11:30:23.870065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420
00:35:26.621 qpair failed and we were unable to recover it.
00:35:26.621 [2024-10-06 11:30:23.870311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.621 [2024-10-06 11:30:23.870328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420
00:35:26.621 qpair failed and we were unable to recover it.
00:35:26.622 [2024-10-06 11:30:23.870490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.622 [2024-10-06 11:30:23.870507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420
00:35:26.622 qpair failed and we were unable to recover it.
00:35:26.622 [2024-10-06 11:30:23.870694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.622 [2024-10-06 11:30:23.870708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:26.622 qpair failed and we were unable to recover it.
00:35:26.624 [2024-10-06 11:30:23.891200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.624 [2024-10-06 11:30:23.891211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.624 qpair failed and we were unable to recover it. 00:35:26.624 [2024-10-06 11:30:23.891450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.624 [2024-10-06 11:30:23.891462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.624 qpair failed and we were unable to recover it. 00:35:26.624 [2024-10-06 11:30:23.891635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.624 [2024-10-06 11:30:23.891647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.624 qpair failed and we were unable to recover it. 00:35:26.624 [2024-10-06 11:30:23.891765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.624 [2024-10-06 11:30:23.891777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.624 qpair failed and we were unable to recover it. 00:35:26.624 [2024-10-06 11:30:23.891951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.624 [2024-10-06 11:30:23.891963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.624 qpair failed and we were unable to recover it. 00:35:26.624 [2024-10-06 11:30:23.892129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.624 [2024-10-06 11:30:23.892141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.624 qpair failed and we were unable to recover it. 00:35:26.624 [2024-10-06 11:30:23.892255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.624 [2024-10-06 11:30:23.892267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.624 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.892455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.892467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.892658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.892669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.892914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.892926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 
00:35:26.625 [2024-10-06 11:30:23.893157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.893169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.893284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.893296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.893539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.893550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.893672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.893684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.893887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.893900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.894089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.894101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.894355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.894366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.894492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.894504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.894759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.894770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.894949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.894960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 
00:35:26.625 [2024-10-06 11:30:23.895057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.895077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.895192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.895203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.895447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.895459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.895636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.895647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.895769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.895781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.895950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.895962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.896213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.896226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.896406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.896420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.896684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.896696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.896879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.896891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 
00:35:26.625 [2024-10-06 11:30:23.897018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.897030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.897228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.897241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.897421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.897432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.897571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.897583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.897689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.897701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.897879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.897890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.898073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.898085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.898189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.898199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.898377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.898389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.898584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.898595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 
00:35:26.625 [2024-10-06 11:30:23.898769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.898781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.898901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.898912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.899153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.899165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.899327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.625 [2024-10-06 11:30:23.899339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.625 qpair failed and we were unable to recover it. 00:35:26.625 [2024-10-06 11:30:23.899459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.899471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.899595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.899607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.899744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.899756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.899876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.899888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.900082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.900094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.900258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.900270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 
00:35:26.626 [2024-10-06 11:30:23.900448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.900460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.900654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.900666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.900779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.900791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.901075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.901088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.901368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.901390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.901587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.901604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.901794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.901812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.902005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.902022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.902218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.902236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.902412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.902429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 
00:35:26.626 [2024-10-06 11:30:23.902540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.902557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.902729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.902746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.903002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.903019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.903197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.903214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.903415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.903431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.903699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.903717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.903918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.903935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.904116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.904139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.904299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.904317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.904464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.904478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 
00:35:26.626 [2024-10-06 11:30:23.904739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.904751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.904928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.904940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.905125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.905138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.905230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.905240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.905363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.905374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.905561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.905572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.905742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.905755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.905958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.905969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.906219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.906232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.906413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.906425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 
00:35:26.626 [2024-10-06 11:30:23.906610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.906623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.906813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.906826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.906940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.906952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.907132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.907144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.907359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.907370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.907670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.907682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.907870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.626 [2024-10-06 11:30:23.907882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.626 qpair failed and we were unable to recover it. 00:35:26.626 [2024-10-06 11:30:23.908065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.908077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.908264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.908276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.908464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.908476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 
00:35:26.627 [2024-10-06 11:30:23.908666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.908678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.908799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.908811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.909064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.909077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.909241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.909252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.909379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.909391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.909525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.909536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.909648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.909660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.909848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.909860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.910029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.910041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.910250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.910261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 
00:35:26.627 [2024-10-06 11:30:23.910385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.910397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.910533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.910545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.910680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.910691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.910868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.910880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.911072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.911085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.911188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.911200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.911384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.911396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.911528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.911542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.911656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.911668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.911793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.911805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 
00:35:26.627 [2024-10-06 11:30:23.911994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.912005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.912117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.912129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.912266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.912278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.912445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.912457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.912661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.912674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.912837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.912848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.913012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.913025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.913128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.913139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.913237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.913249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.913457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.913469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 
00:35:26.627 [2024-10-06 11:30:23.913649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.913661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.913773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.913785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.913902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.913914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.914041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.914053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.914222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.914234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.914350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.914363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.914582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.914593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.914706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.914718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.914884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.914896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 00:35:26.627 [2024-10-06 11:30:23.915069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.627 [2024-10-06 11:30:23.915081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.627 qpair failed and we were unable to recover it. 
00:35:26.627 [2024-10-06 11:30:23.915192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.628 [2024-10-06 11:30:23.915204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:26.628 qpair failed and we were unable to recover it.
00:35:26.632 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously through 2024-10-06 11:30:23.950765 ...]
00:35:26.632 [2024-10-06 11:30:23.950896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.950908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.951021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.951032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.951135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.951148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.951328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.951341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.951507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.951519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.951622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.951633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.951864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.951876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.952049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.952064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.952302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.952314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.952400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.952411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 
00:35:26.633 [2024-10-06 11:30:23.952587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.952599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.952829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.952841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.953020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.953031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.953212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.953224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.953398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.953410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.953616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.953628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.953881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.953893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.954015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.954026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.954160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.954172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.954349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.954360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 
00:35:26.633 [2024-10-06 11:30:23.954529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.954541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.954639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.954652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.954828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.954839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.955003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.955016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.955102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.955115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.955282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.955294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.955416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.955428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.955528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.955540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.955787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.955798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.955990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.956002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 
00:35:26.633 [2024-10-06 11:30:23.956181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.956193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.956421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.956432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.956638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.956650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.956828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.956840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.956970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.956984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.957246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.957258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.957423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.957434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.633 [2024-10-06 11:30:23.957560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.633 [2024-10-06 11:30:23.957572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.633 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.957677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.957688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.957813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.957825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 
00:35:26.634 [2024-10-06 11:30:23.957992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.958004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.958128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.958140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.958330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.958341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.958467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.958479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.958761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.958773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.958891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.958903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.959085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.959097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.959267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.959278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.959406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.959418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.959528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.959540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 
00:35:26.634 [2024-10-06 11:30:23.959712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.959724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.959837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.959850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.959958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.959971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.960165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.960178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.960270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.960281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.960475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.960488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.960680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.960692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.960810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.960822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.960939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.960951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.961047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.961069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 
00:35:26.634 [2024-10-06 11:30:23.961191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.961202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.961408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.961420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.961636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.961648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.961746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.961759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.961934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.961945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.962044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.962057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.962234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.962246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.962350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.962362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.962599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.962611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.962804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.962816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 
00:35:26.634 [2024-10-06 11:30:23.963057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.963072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.963191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.963203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.963383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.963395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.963517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.963528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.963761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.963776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.963959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.963972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.964143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.964156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.964338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.964350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.964457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.964470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.634 [2024-10-06 11:30:23.964597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.964609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 
00:35:26.634 [2024-10-06 11:30:23.964810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.634 [2024-10-06 11:30:23.964821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.634 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.964989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.965001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.965130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.965142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.965258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.965270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.965368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.965379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.965501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.965514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.965632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.965643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.965823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.965836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.965937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.965948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.966070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.966082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 
00:35:26.635 [2024-10-06 11:30:23.966191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.966203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.966304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.966316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.966422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.966434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.966596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.966608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.966729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.966741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.966958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.966970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.967095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.967106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.967298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.967309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.967436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.967449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.967545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.967557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 
00:35:26.635 [2024-10-06 11:30:23.967744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.967755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.967975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.967988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.968120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.968132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.968245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.968259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.968448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.968460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.968646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.968657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.968834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.968847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.969026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.969038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.969156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.969169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.969281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.969293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 
00:35:26.635 [2024-10-06 11:30:23.969397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.969409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.969524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.969536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.969627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.969638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.969738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.969749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.969849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.969863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.970037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.970050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.970234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.970246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.970374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.970386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.970503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.970515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.970626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.970638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 
00:35:26.635 [2024-10-06 11:30:23.970750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.970761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.970930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.970942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.971063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.971075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.635 qpair failed and we were unable to recover it. 00:35:26.635 [2024-10-06 11:30:23.971193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.635 [2024-10-06 11:30:23.971204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.636 qpair failed and we were unable to recover it. 00:35:26.636 [2024-10-06 11:30:23.971376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.636 [2024-10-06 11:30:23.971388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.636 qpair failed and we were unable to recover it. 00:35:26.636 [2024-10-06 11:30:23.971505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.636 [2024-10-06 11:30:23.971516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.636 qpair failed and we were unable to recover it. 00:35:26.636 [2024-10-06 11:30:23.971681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.636 [2024-10-06 11:30:23.971693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.636 qpair failed and we were unable to recover it. 00:35:26.636 [2024-10-06 11:30:23.971870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.636 [2024-10-06 11:30:23.971883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.636 qpair failed and we were unable to recover it. 00:35:26.636 [2024-10-06 11:30:23.971985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.636 [2024-10-06 11:30:23.971997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.636 qpair failed and we were unable to recover it. 00:35:26.636 [2024-10-06 11:30:23.972173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.636 [2024-10-06 11:30:23.972195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.636 qpair failed and we were unable to recover it. 
00:35:26.636 [2024-10-06 11:30:23.972298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:26.636 [2024-10-06 11:30:23.972310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:26.636 qpair failed and we were unable to recover it.
00:35:26.636 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously from 2024-10-06 11:30:23.972 through 11:30:24.007; every retry ends the same way ...]
00:35:26.641 [2024-10-06 11:30:24.007164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.007176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.007404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.007425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.007632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.007670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.007914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.007939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.008158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.008172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.008286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.008299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.008436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.008448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.008652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.008663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.008909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.008920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.009095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.009107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 
00:35:26.641 [2024-10-06 11:30:24.009289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.009302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.009513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.009525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.009642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.009654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.009782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.009794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.009959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.009974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.010082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.010094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.010271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.010282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.010398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.010410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.010516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.010528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.010621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.010633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 
00:35:26.641 [2024-10-06 11:30:24.010840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.010852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.011020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.011032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.011138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.011150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.011284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.011296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.011537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.011549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.011794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.011806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.011936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.011948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.012076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.012088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.012289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.012301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.012409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.012421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 
00:35:26.641 [2024-10-06 11:30:24.012503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.012515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.641 [2024-10-06 11:30:24.012776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.641 [2024-10-06 11:30:24.012788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.641 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.012946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.012957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.013074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.013087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.013276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.013287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.013464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.013476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.013623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.013641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.013819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.013831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.014087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.014099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.014277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.014288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 
00:35:26.642 [2024-10-06 11:30:24.014472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.014484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.014610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.014622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.014883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.014894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.015062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.015074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.015186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.015198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.015398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.015410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.015566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.015577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.015762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.015774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.015955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.015967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.016123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.016135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 
00:35:26.642 [2024-10-06 11:30:24.016243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.016255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.016489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.016501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.016675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.016687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.016917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.016929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.017099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.017113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.017284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.017296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.017419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.017430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.017607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.017619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.017799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.017811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.017934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.017947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 
00:35:26.642 [2024-10-06 11:30:24.018196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.018209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.018379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.018391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.018461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.018472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.018647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.018660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.018824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.018836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.019021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.019033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.019189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.019201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.019330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.019342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.019549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.019561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.019682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.019694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 
00:35:26.642 [2024-10-06 11:30:24.019825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.019837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.020007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.020019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.020212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.020225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.020421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.020432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.642 [2024-10-06 11:30:24.020596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.642 [2024-10-06 11:30:24.020608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.642 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.020792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.020804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.020907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.020918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.021034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.021045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.021229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.021241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.021356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.021368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 
00:35:26.643 [2024-10-06 11:30:24.021495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.021507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.021580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.021593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.021700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.021711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.021879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.021891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.022131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.022144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.022252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.022264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.022439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.022451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.022580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.022592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.022845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.022857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.023022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.023033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 
00:35:26.643 [2024-10-06 11:30:24.023118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.023129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.023363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.023374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.023457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.023468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.023729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.023741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.023979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.023991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.024136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.024148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.024332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.024343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.024522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.024534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.024774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.024785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.024962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.024973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 
00:35:26.643 [2024-10-06 11:30:24.025173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.025184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.025296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.025308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.025566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.025578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.025813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.025824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.026079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.026091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.026277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.026289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.026470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.026482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.026673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.026685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.026784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.026796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.026922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.026934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 
00:35:26.643 [2024-10-06 11:30:24.027053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.027068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.027249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.027261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.027360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.027372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.027548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.027560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.027795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.027806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.027974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.027986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.028162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.028174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.643 [2024-10-06 11:30:24.028290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.643 [2024-10-06 11:30:24.028302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.643 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.028420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.028430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.028556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.028568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 
00:35:26.644 [2024-10-06 11:30:24.028746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.028758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.028951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.028965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.029071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.029082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.029247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.029259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.029387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.029399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.029574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.029586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.029757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.029769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.029870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.029882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.030084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.030096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.030345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.030357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 
00:35:26.644 [2024-10-06 11:30:24.030545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.030557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.030719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.030730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.030848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.030859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.031022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.031034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.031221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.031233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.031360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.031370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.031492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.031503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.031620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.031631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.031813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.031825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.032001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.032013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 
00:35:26.644 [2024-10-06 11:30:24.032110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.032121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.032299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.032311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.032424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.032436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.032621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.032634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.032735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.032747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.032946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.032957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.033159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.033170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.033260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.033270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.033475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.033488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.033582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.033593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 
00:35:26.644 [2024-10-06 11:30:24.033723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.033735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.033859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.033871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.034035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.034047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.034285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.034297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.034462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.034474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.034655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.034667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.034795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.034807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.035018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.035029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.035265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.035277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 00:35:26.644 [2024-10-06 11:30:24.035474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.644 [2024-10-06 11:30:24.035486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.644 qpair failed and we were unable to recover it. 
00:35:26.645 [2024-10-06 11:30:24.035722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.035733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.035968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.035981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.036092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.036104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.036271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.036283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.036406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.036418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.036532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.036543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.036712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.036723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.036893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.036905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.037042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.037054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.037186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.037198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 
00:35:26.645 [2024-10-06 11:30:24.037295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.037308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.037561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.037573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.037690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.037702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.037832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.037844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.038013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.038025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.038263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.038275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.038384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.038397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.038566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.038578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.038748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.038760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.038939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.038951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 
00:35:26.645 [2024-10-06 11:30:24.039052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.039066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.039192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.039203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.039319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.039331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.039452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.039464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.039634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.039645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.039753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.039766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.039960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.039972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.040090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.040102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.040282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.040294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.040410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.040422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 
00:35:26.645 [2024-10-06 11:30:24.040566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.040578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.040708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.040720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.040895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.040907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.041023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.041034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.041207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.041220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.041336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.041348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.041578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.041590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.645 qpair failed and we were unable to recover it. 00:35:26.645 [2024-10-06 11:30:24.041756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.645 [2024-10-06 11:30:24.041768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.041896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.041908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.042031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.042043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 
00:35:26.646 [2024-10-06 11:30:24.042180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.042192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.042370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.042384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.042549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.042561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.042671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.042683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.042846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.042858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.043031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.043043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.043305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.043317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.043483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.043495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.043728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.043740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.043973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.043985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 
00:35:26.646 [2024-10-06 11:30:24.044149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.044162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.044400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.044411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.044620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.044631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.044818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.044830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.045037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.045049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.045301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.045313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.045490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.045502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.045634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.045646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.045760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.045771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.045952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.045964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 
00:35:26.646 [2024-10-06 11:30:24.046070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.046081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.046212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.046223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.046454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.046466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.046582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.046594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.046712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.046724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.046901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.046913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.047085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.047098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.047192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.047202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.047316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.047328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.047575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.047587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 
00:35:26.646 [2024-10-06 11:30:24.047825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.047838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.048023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.048035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.048220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.048232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.048468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.048480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.048576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.048586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.048765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.048777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.048943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.048955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.049131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.049145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.049268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.049280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 00:35:26.646 [2024-10-06 11:30:24.049401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.049413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.646 qpair failed and we were unable to recover it. 
00:35:26.646 [2024-10-06 11:30:24.049616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.646 [2024-10-06 11:30:24.049628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.049738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.049752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.049929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.049941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.050026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.050037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.050156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.050168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.050260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.050272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.050459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.050471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.050622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.050633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.050768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.050779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.050957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.050969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 
00:35:26.647 [2024-10-06 11:30:24.051172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.051184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.051327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.051339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.051468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.051480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.051666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.051678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.051790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.051802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.051990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.052002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.052252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.052264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.052405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.052417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.052672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.052683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.052816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.052828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 
00:35:26.647 [2024-10-06 11:30:24.053005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.053017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.053147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.053160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.053318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.053329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.053524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.053536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.053637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.053648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.053760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.053770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.053903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.053915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.054034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.054047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.054256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.054268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.054448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.054459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 
00:35:26.647 [2024-10-06 11:30:24.054580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.054592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.054873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.054885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.055121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.055133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.055331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.055343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.055476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.055488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.055644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.055656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.055836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.055847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.056009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.056021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.056203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.056216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.056415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.056427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 
00:35:26.647 [2024-10-06 11:30:24.056685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.056697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.056922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.056936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.057106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.057118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.647 [2024-10-06 11:30:24.057302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.647 [2024-10-06 11:30:24.057314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.647 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.057496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.057508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.057693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.057704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.057989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.058001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.058179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.058192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.058359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.058371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.058581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.058593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 
00:35:26.648 [2024-10-06 11:30:24.058770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.058782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.058951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.058963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.059086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.059099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.059272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.059283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.059382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.059394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.059578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.059589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.059705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.059718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.059911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.059922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.060020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.060032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.060150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.060162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 
00:35:26.648 [2024-10-06 11:30:24.060288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.060300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.060410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.060422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.060595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.060607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.060866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.060878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.061035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.061047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.061184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.061196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.061377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.061389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.061572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.061584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.061712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.061723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.061900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.061912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 
00:35:26.648 [2024-10-06 11:30:24.062020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.062032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.062145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.062157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.062268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.062280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.062362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.062372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.062603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.062614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.062850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.062861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.062977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.062989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.063100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.063112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.063230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.063243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.063423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.063435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 
00:35:26.648 [2024-10-06 11:30:24.063554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.063566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.063797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.063810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.063986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.063998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.064215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.064227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.064330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.064342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.064508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.064520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.648 [2024-10-06 11:30:24.064782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.648 [2024-10-06 11:30:24.064794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.648 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.064919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.064930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.065105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.065117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.065220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.065230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 
00:35:26.649 [2024-10-06 11:30:24.065402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.065414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.065592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.065603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.065715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.065725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.065904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.065916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.066012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.066024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.066210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.066222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.066366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.066377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.066620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.066632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.066801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.066812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.067015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.067027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 
00:35:26.649 [2024-10-06 11:30:24.067211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.067223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.067425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.067437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.067601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.067612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.067789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.067802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.067993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.068005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.068122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.068135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.068349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.068361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.068633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.068645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.068884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.068896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.069081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.069099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 
00:35:26.649 [2024-10-06 11:30:24.069209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.069221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.069382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.069394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.069576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.069588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.069712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.069724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.069909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.069921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.070083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.070095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.070280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.070292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.070460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.070472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.070577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.070588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.070707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.070719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 
00:35:26.649 [2024-10-06 11:30:24.070827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.070839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.071010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.071024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.071190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.071202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.071313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.071325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.071514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.649 [2024-10-06 11:30:24.071526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.649 qpair failed and we were unable to recover it. 00:35:26.649 [2024-10-06 11:30:24.071607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.071618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.071689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.071699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.071876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.071888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.072018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.072030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.072162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.072174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 
00:35:26.650 [2024-10-06 11:30:24.072301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.072313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.072480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.072492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.072665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.072677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.072853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.072864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.073045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.073056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.073166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.073178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.073276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.073288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.073387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.073398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.073631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.073643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.073848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.073860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 
00:35:26.650 [2024-10-06 11:30:24.073967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.073979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.074095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.074107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.074337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.074349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.074528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.074540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.074745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.074756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.074877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.074889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.075165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.075177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.075273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.075284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.075517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.075529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.075714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.075726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 
00:35:26.650 [2024-10-06 11:30:24.075824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.075835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.076016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.076028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.076217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.076237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.076385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.076397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.076626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.076638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.076811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.076822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.076997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.077010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.077118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.077132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.077294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.077306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.077404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.077414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 
00:35:26.650 [2024-10-06 11:30:24.077579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.077591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.077710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.077724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.077983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.077995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.078113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.078125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.078254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.078266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.078434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.078446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.078631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.078643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.078758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.650 [2024-10-06 11:30:24.078769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.650 qpair failed and we were unable to recover it. 00:35:26.650 [2024-10-06 11:30:24.078946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.078958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.079142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.079155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 
00:35:26.651 [2024-10-06 11:30:24.079341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.079353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.079493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.079505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.079636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.079648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.079791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.079803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.079905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.079916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.080179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.080191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.080342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.080354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.080606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.080618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.080744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.080756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.080883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.080894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 
00:35:26.651 [2024-10-06 11:30:24.080966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.080977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.081149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.081161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.081353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.081365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.081492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.081503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.081694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.081706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.081883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.081895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.082053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.082069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.082198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.082211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.082443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.082455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.082542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.082552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 
00:35:26.651 [2024-10-06 11:30:24.082736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.082748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.082948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.082959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.083083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.083095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.083329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.083341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.083575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.083587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.083717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.083729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.083897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.083908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.084020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.084032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.084218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.084230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.084398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.084410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 
00:35:26.651 [2024-10-06 11:30:24.084527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.084538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.084620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.084632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.084754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.084766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.085006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.085017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.085206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.085218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.085338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.085350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.085621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.085633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.085761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.085774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.086020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.086032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.086171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.086182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 
00:35:26.651 [2024-10-06 11:30:24.086368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.651 [2024-10-06 11:30:24.086380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.651 qpair failed and we were unable to recover it. 00:35:26.651 [2024-10-06 11:30:24.086575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.086587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.086767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.086778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.086955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.086966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.087092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.087105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.087307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.087319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.087550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.087562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.087743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.087755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.087964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.087976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.088235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.088247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 
00:35:26.652 [2024-10-06 11:30:24.088428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.088440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.088618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.088630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.088741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.088752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.088853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.088865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.089046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.089062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.089242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.089253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.089370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.089382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.089554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.089566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.089710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.089733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.089915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.089933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 
00:35:26.652 [2024-10-06 11:30:24.090068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.090087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.090249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.090267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.090521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.090538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.090813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.090830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.091045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.091062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.091298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.091310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.091514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.091526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.091792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.091804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.091943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.091955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.092163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.092175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 
00:35:26.652 [2024-10-06 11:30:24.092411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.092423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.092530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.092544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.092653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.092664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.092767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.092778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.092981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.092994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.093075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.093086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.093210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.093222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.093401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.093413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.093528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.093539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.093717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.093729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 
00:35:26.652 [2024-10-06 11:30:24.093914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.093926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.094101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.094114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.094220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.094232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.094432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.652 [2024-10-06 11:30:24.094444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.652 qpair failed and we were unable to recover it. 00:35:26.652 [2024-10-06 11:30:24.094614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.094627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.094866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.094878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.095123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.095136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.095225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.095236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.095334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.095346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.095543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.095555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 
00:35:26.653 [2024-10-06 11:30:24.095679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.095690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.095859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.095871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.095955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.095966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.096193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.096205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.096337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.096349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.096520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.096532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.096770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.096782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.096953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.096964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.097141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.097153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.097320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.097331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 
00:35:26.653 [2024-10-06 11:30:24.097499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.097511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.097653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.097665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.097839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.097851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.097958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.097969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.098077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.098088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.098218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.098230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.098397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.098409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.098574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.098586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.098781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.098793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.098990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.099002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 
00:35:26.653 [2024-10-06 11:30:24.099116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.099128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.099243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.099256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.099429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.099441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.099640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.099652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.099818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.099831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.099959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.099971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.100222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.100234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.100346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.100358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.100483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.100495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.100561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.100572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 
00:35:26.653 [2024-10-06 11:30:24.100759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.100771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.100970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.100982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.653 [2024-10-06 11:30:24.101092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.653 [2024-10-06 11:30:24.101105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.653 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.101218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.101229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.101458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.101470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.101640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.101652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.101814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.101826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.102067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.102079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.102180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.102191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.102352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.102364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 
00:35:26.654 [2024-10-06 11:30:24.102541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.102553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.102668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.102680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.102803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.102815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.102989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.103001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.103144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.103157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.103339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.103351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.103454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.103466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.103581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.103593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.103854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.103866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.103948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.103959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 
00:35:26.654 [2024-10-06 11:30:24.104086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.104098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.104310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.104321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.104497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.104509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.104676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.104688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.104778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.104789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.104905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.104916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.105129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.105141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.105308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.105320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.105497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.105509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.105626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.105639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 
00:35:26.654 [2024-10-06 11:30:24.105870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.105883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.106064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.106078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.106254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.106266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.106464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.106476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.106707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.106719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.106955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.106967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.107148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.107160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.107367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.107379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.107556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.107567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.107640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.107650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 
00:35:26.654 [2024-10-06 11:30:24.107773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.107785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.107903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.107914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.108154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.108166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.108359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.108371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.108481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.654 [2024-10-06 11:30:24.108492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.654 qpair failed and we were unable to recover it. 00:35:26.654 [2024-10-06 11:30:24.108674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.108686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.108866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.108878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.109083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.109095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.109354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.109366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.109488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.109500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 
00:35:26.655 [2024-10-06 11:30:24.109644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.109656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.109818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.109831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.109963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.109975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.110179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.110192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.110390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.110402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.110582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.110594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.110713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.110725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.110892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.110904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.111134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.111148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.111333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.111345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 
00:35:26.655 [2024-10-06 11:30:24.111539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.111550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.111753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.111764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.111946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.111958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.112071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.112083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.112179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.112191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.112396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.112407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.112519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.112531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.112688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.112700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.112806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.112818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.113052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.113069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 
00:35:26.655 [2024-10-06 11:30:24.113153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.113164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.113302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.113314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.113487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.113499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.113621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.113632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.113748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.113760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.113992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.114004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.114178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.114189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.114332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.114344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.114516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.114528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.114653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.114665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 
00:35:26.655 [2024-10-06 11:30:24.114774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.114785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.114888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.114900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.115074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.115086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.115267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.115279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.115471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.115483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.115654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.115666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.115766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.115777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.115894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.655 [2024-10-06 11:30:24.115906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.655 qpair failed and we were unable to recover it. 00:35:26.655 [2024-10-06 11:30:24.116079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.116091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.116191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.116202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 
00:35:26.656 [2024-10-06 11:30:24.116317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.116329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.116560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.116572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.116759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.116771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.116952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.116964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.117133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.117145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.117310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.117322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.117506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.117519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.117647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.117658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.117784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.117798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.117961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.117973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 
00:35:26.656 [2024-10-06 11:30:24.118094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.118106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.118218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.118229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.118436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.118449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.118619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.118630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.118741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.118753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.119007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.119018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.119123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.119136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.119309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.119322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.119446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.119458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.119621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.119633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 
00:35:26.656 [2024-10-06 11:30:24.119763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.119774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.119943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.119954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.120132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.120145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.120335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.120346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.120471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.120483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.120639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.120650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.120830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.120842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.120953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.120965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.121071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.121084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.121253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.121265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 
00:35:26.656 [2024-10-06 11:30:24.121482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.121493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.121681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.121693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.121820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.121832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.122066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.122078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.122174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.122185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.122406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.122417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.122548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.122559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.122750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.122762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.122927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.122939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.123106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.123118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 
00:35:26.656 [2024-10-06 11:30:24.123309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.656 [2024-10-06 11:30:24.123320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.656 qpair failed and we were unable to recover it. 00:35:26.656 [2024-10-06 11:30:24.123464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.123476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.123611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.123623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.123876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.123888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.124030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.124042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.124308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.124320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.124481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.124493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.124638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.124650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.124899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.124915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.125042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.125053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 
00:35:26.657 [2024-10-06 11:30:24.125254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.125266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.125498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.125511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.125674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.125686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.125822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.125834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.126000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.126012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.126090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.126101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.126182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.126192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.126355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.126366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.126551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.126563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.126656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.126667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 
00:35:26.657 [2024-10-06 11:30:24.126793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.126804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.126979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.126990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.127193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.127205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.127388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.127400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.127633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.127645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.127887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.127898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.128041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.128053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.128292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.128304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.128574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.128585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.128770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.128783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 
00:35:26.657 [2024-10-06 11:30:24.128894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.128905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.129072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.129084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.129202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.129214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.129394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.129406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.129528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.129539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.129786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.129798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.129969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.129980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.130157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.130169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.130297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.657 [2024-10-06 11:30:24.130308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.657 qpair failed and we were unable to recover it. 00:35:26.657 [2024-10-06 11:30:24.130546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.130558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 
00:35:26.658 [2024-10-06 11:30:24.130722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.130734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.130856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.130868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.131033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.131045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.131223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.131235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.131417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.131429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.131527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.131537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.131615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.131626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.131742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.131754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.131894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.131908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.132005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.132018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 
00:35:26.658 [2024-10-06 11:30:24.132183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.132195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.132429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.132441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.132547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.132558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.132666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.132676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.132792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.132804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.132982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.132994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.133179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.133190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.133450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.133462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.133654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.133666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.133848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.133860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 
00:35:26.658 [2024-10-06 11:30:24.134030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.134043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.134159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.134171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.134402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.134414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.658 [2024-10-06 11:30:24.134578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.658 [2024-10-06 11:30:24.134590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.658 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.134821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.134832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.135002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.135014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.135189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.135201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.135464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.135475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.135654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.135667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.135806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.135817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 
00:35:26.659 [2024-10-06 11:30:24.136012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.136025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.136205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.136218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.136408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.136420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.136626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.136637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.136802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.136815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.137004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.137016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.137299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.137311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.137481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.137492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.137683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.137695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.137900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.137911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 
00:35:26.659 [2024-10-06 11:30:24.138098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.138111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.138297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.138309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.138494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.138505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.138670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.138682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.138915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.138926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.139038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.139049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.139286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.139298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.139474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.139486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.139665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.139678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.139910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.139922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 
00:35:26.659 [2024-10-06 11:30:24.140104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.140117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.140245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.140257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.140488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.140500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.140628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.140640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.140896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.140908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.141094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.141107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.141239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.141251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.141372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.141384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.141486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.141497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.141673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.141685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 
00:35:26.659 [2024-10-06 11:30:24.141900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.141913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.142078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.142090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.142222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.142234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.142396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.142408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.142587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.142598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.142772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.142784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.142898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.142910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.143028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.143040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.659 qpair failed and we were unable to recover it. 00:35:26.659 [2024-10-06 11:30:24.143158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.659 [2024-10-06 11:30:24.143170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.143301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.143313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 
00:35:26.660 [2024-10-06 11:30:24.143431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.143442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.143642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.143653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.143774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.143786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.143964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.143977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.144114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.144126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.144232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.144243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.144409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.144421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.144602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.144614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.144864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.144876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.145000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.145012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 
00:35:26.660 [2024-10-06 11:30:24.145264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.145276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.145462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.145474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.145592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.145603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.145748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.145760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.145957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.145969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.146083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.146095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.146194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.146205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.146303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.146315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.146452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.146467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.146634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.146646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 
00:35:26.660 [2024-10-06 11:30:24.146789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.146801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.147044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.147057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.147239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.147251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.147430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.147442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.147572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.147583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.147749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.147761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.147944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.147955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.148067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.148079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.148259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.148271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.148466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.148478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 
00:35:26.660 [2024-10-06 11:30:24.148737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.148748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.148979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.148991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.149183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.149195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.149379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.149391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.149575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.149587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.149776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.149788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.149901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.149913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.150146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.150158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.150319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.150330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.150581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.150594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 
00:35:26.660 [2024-10-06 11:30:24.150680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.150691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.150804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.150816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.660 qpair failed and we were unable to recover it. 00:35:26.660 [2024-10-06 11:30:24.151010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.660 [2024-10-06 11:30:24.151022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.151129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.151140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.151254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.151266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.151431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.151443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.151559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.151571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.151731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.151743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.151926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.151938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.152171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.152183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 
00:35:26.661 [2024-10-06 11:30:24.152300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.152312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.152580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.152592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.152768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.152780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.152891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.152903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.153091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.153103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.153316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.153328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.153578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.153590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.153761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.153773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.153897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.153912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.154009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.154021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 
00:35:26.661 [2024-10-06 11:30:24.154282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.154295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.154470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.154482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.154610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.154621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.154798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.154809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.154976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.154987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.155128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.155141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.155254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.155266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.155442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.155454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.155569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.155581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.155765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.155777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 
00:35:26.661 [2024-10-06 11:30:24.155877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.155888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.156075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.156088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.156214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.156225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.156459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.156472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.156591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.156603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.156766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.156778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.156902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.156914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.157077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.157090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.157257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.157269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.157371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.157383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 
00:35:26.661 [2024-10-06 11:30:24.157521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.157533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.661 [2024-10-06 11:30:24.157649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.661 [2024-10-06 11:30:24.157661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.661 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.157786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.157798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.157880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.157890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.158078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.158091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.158277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.158289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.158472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.158484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.158591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.158603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.158868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.158879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.159005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.159017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 
00:35:26.662 [2024-10-06 11:30:24.159144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.159156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.159318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.159330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.159428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.159439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.159537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.159549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.159783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.159795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.159965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.159977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.160146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.160159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.160271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.160282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.160482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.160497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.160687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.160699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 
00:35:26.662 [2024-10-06 11:30:24.160945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.160957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.161086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.161098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.161241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.161253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.161420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.161432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.161599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.161610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.161835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.161846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.162054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.162071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.162301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.162313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.162422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.162432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.162614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.162626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 
00:35:26.662 [2024-10-06 11:30:24.162820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.162832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.162963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.162975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.163153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.163165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.163411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.163423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.163607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.163619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.163730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.163742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.163858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.163869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.662 [2024-10-06 11:30:24.164151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.662 [2024-10-06 11:30:24.164164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.662 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.164333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.164346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.164535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.164547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 
00:35:26.949 [2024-10-06 11:30:24.164658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.164671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.164783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.164795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.165088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.165101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.165269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.165281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.165389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.165401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.165567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.165579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.165772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.165784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.166003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.166015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.166131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.166144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.166329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.166341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 
00:35:26.949 [2024-10-06 11:30:24.166452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.166464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.166578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.166591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.166849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.166860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.167021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.167033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.167221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.167233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.167361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.167373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.167510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.167522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.167704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.167716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.167837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.167850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.168033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.168045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 
00:35:26.949 [2024-10-06 11:30:24.168154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.168166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.168339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.168351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.168433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.168444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.168677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.168688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.168861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.168873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.169062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.169074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.949 [2024-10-06 11:30:24.169250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.949 [2024-10-06 11:30:24.169262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.949 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.169440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.169452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.169626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.169638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.169821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.169833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 
00:35:26.950 [2024-10-06 11:30:24.170004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.170016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.170129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.170142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.170402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.170413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.170542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.170554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.170743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.170754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.170926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.170939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.171123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.171135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.171233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.171245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.171406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.171417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.171578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.171590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 
00:35:26.950 [2024-10-06 11:30:24.171767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.171779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.171968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.171980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.172143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.172156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.172413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.172424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.172660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.172672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.172797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.172809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.172931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.172943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.173217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.173229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.173470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.173482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.173652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.173664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 
00:35:26.950 [2024-10-06 11:30:24.173840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.173851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.174045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.174057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.174294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.174307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.174410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.174421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.174547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.174559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.174817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.174829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.175023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.175035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.175194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.175207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.175318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.175331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.175518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.175530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 
00:35:26.950 [2024-10-06 11:30:24.175652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.175665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.175837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.175849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.176035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.950 [2024-10-06 11:30:24.176046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.950 qpair failed and we were unable to recover it. 00:35:26.950 [2024-10-06 11:30:24.176319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.176332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.176477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.176489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.176594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.176605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.176716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.176730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.176914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.176925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.177166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.177177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.177376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.177388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 
00:35:26.951 [2024-10-06 11:30:24.177499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.177509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.177610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.177621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.177816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.177828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.177999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.178012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.178127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.178138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.178393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.178405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.178486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.178496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.178673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.178685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.178815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.178828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.178956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.178969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 
00:35:26.951 [2024-10-06 11:30:24.179132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.179145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.179323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.179335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.179450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.179461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.179653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.179664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.179838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.179851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.180092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.180115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.180258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.180275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.180464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.180480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.180559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.180577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.180700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.180716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 
00:35:26.951 [2024-10-06 11:30:24.180840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.180856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.181037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.181049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.181181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.181194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.181375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.181386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.181619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.181629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.181737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.181748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.181977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.181989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.182157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.182169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.182337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.182351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.951 qpair failed and we were unable to recover it. 00:35:26.951 [2024-10-06 11:30:24.182472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.951 [2024-10-06 11:30:24.182485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 
00:35:26.952 [2024-10-06 11:30:24.182759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.182770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.182946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.182958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.183191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.183204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.183321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.183333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.183534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.183546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.183668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.183680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.183852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.183864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.183988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.184001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.184121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.184133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.184298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.184310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 
00:35:26.952 [2024-10-06 11:30:24.184413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.184424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.184527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.184537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.184709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.184721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.184900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.184912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.185078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.185090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.185268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.185280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.185517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.185529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.185709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.185722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.185888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.185900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.186025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.186036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 
00:35:26.952 [2024-10-06 11:30:24.186206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.186218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.186453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.186465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.186695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.186708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.186816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.186827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.186942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.186953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.187172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.187197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.187313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.187330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.187455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.187473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.187646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.187663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.187793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.187812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 
00:35:26.952 [2024-10-06 11:30:24.188073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.188092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.188267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.188281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.188388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.188399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.188578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.188590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.188707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.188719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.188881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.952 [2024-10-06 11:30:24.188892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.952 qpair failed and we were unable to recover it. 00:35:26.952 [2024-10-06 11:30:24.189127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.189139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.189256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.189268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.189429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.189441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.189546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.189557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 
00:35:26.953 [2024-10-06 11:30:24.189677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.189689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.189858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.189869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.189967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.189980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.190103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.190116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.190214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.190226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.190496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.190508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.190681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.190693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.190843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.190855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.191023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.191035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.191131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.191142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 
00:35:26.953 [2024-10-06 11:30:24.191312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.191324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.191555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.191567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.191732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.191744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.191869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.191881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.191979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.191990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.192161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.192174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.192351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.192362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.192487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.192499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.192706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.192718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.192882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.192895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 
00:35:26.953 [2024-10-06 11:30:24.193016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.193028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.193146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.193158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.193367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.193379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.193563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.193575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.193762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.193774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.193877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.193891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.194078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.953 [2024-10-06 11:30:24.194091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.953 qpair failed and we were unable to recover it. 00:35:26.953 [2024-10-06 11:30:24.194269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.194281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.194397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.194410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.194512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.194524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 
00:35:26.954 [2024-10-06 11:30:24.194684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.194696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.194867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.194878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.194977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.194990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.195124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.195137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.195367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.195379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.195545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.195557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.195733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.195745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.195852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.195864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.195989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.196001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.196117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.196129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 
00:35:26.954 [2024-10-06 11:30:24.196247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.196259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.196353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.196365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.196493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.196504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.196625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.196636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.196868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.196880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.196994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.197006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.197218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.197231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.197348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.197360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.197589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.197601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.197724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.197736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 
00:35:26.954 [2024-10-06 11:30:24.197924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.197936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.198068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.198080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.198188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.198200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.198312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.198324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.198489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.198500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.198571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.198581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.198681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.198693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.198873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.198885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.198993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.199005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.199192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.199204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 
00:35:26.954 [2024-10-06 11:30:24.199303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.199315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.199487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.199499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.199682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.954 [2024-10-06 11:30:24.199694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.954 qpair failed and we were unable to recover it. 00:35:26.954 [2024-10-06 11:30:24.199829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.199841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.200075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.200088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.200249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.200263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.200428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.200440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.200620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.200632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.200813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.200825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.201023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.201035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 
00:35:26.955 [2024-10-06 11:30:24.201149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.201161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.201417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.201429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.201565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.201577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.201709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.201721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.201844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.201855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.202024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.202035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.202206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.202219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.202333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.202345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.202448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.202460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.202589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.202600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 
00:35:26.955 [2024-10-06 11:30:24.202713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.202724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.202844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.202857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.203090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.203103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.203236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.203247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.203425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.203437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.203540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.203552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.203675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.203687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.203785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.203797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.203901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.203912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.204098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.204110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 
00:35:26.955 [2024-10-06 11:30:24.204258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.204271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.204429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.204441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.204612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.204623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.204803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.204816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.204935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.204947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.205056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.205078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.205255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.205267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.205444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.205457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.205594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.205607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 00:35:26.955 [2024-10-06 11:30:24.205807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.955 [2024-10-06 11:30:24.205819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.955 qpair failed and we were unable to recover it. 
00:35:26.955 [2024-10-06 11:30:24.205928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.205941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.206047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.206063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.206178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.206190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.206288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.206301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.206427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.206440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.206553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.206568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.206667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.206679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.206768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.206779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.206880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.206891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.207019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.207031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 
00:35:26.956 [2024-10-06 11:30:24.207144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.207157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.207270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.207281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.207382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.207395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.207503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.207514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.207611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.207623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.207833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.207844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.207945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.207957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.208124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.208136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.208254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.208274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.208447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.208460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 
00:35:26.956 [2024-10-06 11:30:24.208618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.208630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.208700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.208711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.208796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.208809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.208921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.208933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.209028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.209039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.209208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.209220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.209315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.209328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.209437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.209450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.209629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.209641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 00:35:26.956 [2024-10-06 11:30:24.209810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.956 [2024-10-06 11:30:24.209822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.956 qpair failed and we were unable to recover it. 
00:35:26.956-00:35:26.962 [2024-10-06 11:30:24.209998 through 11:30:24.239730] posix.c:1055:posix_sock_create / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: the identical connect() failure (errno = 111) and sock connection error repeat for every subsequent attempt against addr=10.0.0.2, port=4420. The attempts use tqpair=0x7f81cc000b90 throughout, apart from a brief run on tqpair=0x7f81d0000b90 around 11:30:24.222, and each one ends the same way: "qpair failed and we were unable to recover it."
00:35:26.962 [2024-10-06 11:30:24.239829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.239841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.239944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.239956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.240130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.240143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.240273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.240285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.240405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.240417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.240534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.240547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.240732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.240745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.240847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.240859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.240962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.240975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.241085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.241096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 
00:35:26.962 [2024-10-06 11:30:24.241214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.241226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.241457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.241469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.241589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.241601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.241774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.241786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.241891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.241904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.242026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.242040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.242228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.242240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.242425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.242437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.242560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.242572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.242685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.242698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 
00:35:26.962 [2024-10-06 11:30:24.242928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.242940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.243115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.243127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.243294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.962 [2024-10-06 11:30:24.243307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.962 qpair failed and we were unable to recover it. 00:35:26.962 [2024-10-06 11:30:24.243476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.243488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.243656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.243669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.243776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.243789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.243906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.243918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.244118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.244131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.244256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.244268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.244386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.244398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 
00:35:26.963 [2024-10-06 11:30:24.244514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.244526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.244689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.244702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.244889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.244901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.245011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.245024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.245131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.245143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.245259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.245271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.245426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.245439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.245527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.245539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.245762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.245774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.246014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.246026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 
00:35:26.963 [2024-10-06 11:30:24.246157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.246170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.246351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.246363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.246461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.246473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.246579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.246591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.246758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.246770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.246883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.246895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.246994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.247007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.247127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.247140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.247276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.247288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.247400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.247413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 
00:35:26.963 [2024-10-06 11:30:24.247513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.247526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.247705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.247717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.247813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.247825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.248001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.248013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.248182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.248195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.963 qpair failed and we were unable to recover it. 00:35:26.963 [2024-10-06 11:30:24.248363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.963 [2024-10-06 11:30:24.248377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.248553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.248566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.248745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.248758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.248940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.248952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.249070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.249082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 
00:35:26.964 [2024-10-06 11:30:24.249211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.249223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.249335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.249347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.249446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.249459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.249624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.249636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.249803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.249815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.250007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.250020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.250134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.250163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.250237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.250248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.250376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.250389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.250468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.250480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 
00:35:26.964 [2024-10-06 11:30:24.250575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.250589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.250773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.250786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.250915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.250927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.251037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.251050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.251291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.251303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.251413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.251425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.251622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.251634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.251803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.251817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.251925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.251938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.252174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.252187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 
00:35:26.964 [2024-10-06 11:30:24.252361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.252373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.252485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.252498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.252617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.252632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.252761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.252773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.252897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.252910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.253092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.253106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.253275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.253287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.253415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.253428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.253604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.253616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.253730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.253742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 
00:35:26.964 [2024-10-06 11:30:24.253917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.253929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.254114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.964 [2024-10-06 11:30:24.254127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.964 qpair failed and we were unable to recover it. 00:35:26.964 [2024-10-06 11:30:24.254235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.254249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.254423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.254436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.254592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.254605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.254697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.254710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.254885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.254897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.255009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.255022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.255214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.255227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.255396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.255409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 
00:35:26.965 [2024-10-06 11:30:24.255588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.255601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.255760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.255773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.255888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.255901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.256088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.256102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.256273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.256286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.256407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.256419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.256619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.256633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.256803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.256815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.256983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.256996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.257170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.257184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 
00:35:26.965 [2024-10-06 11:30:24.257300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.257314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.257476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.257489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.257665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.257678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.257863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.257876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.257997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.258010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.258120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.258133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.258221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.258232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.258407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.258419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.258523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.258536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.258708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.258721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 
00:35:26.965 [2024-10-06 11:30:24.258829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.258841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.258959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.258972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.259156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.259171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.259273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.259286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.259386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.259398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.259631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.259644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.259755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.259768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.260014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.260027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.260215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.965 [2024-10-06 11:30:24.260228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.965 qpair failed and we were unable to recover it. 00:35:26.965 [2024-10-06 11:30:24.260359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.260373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 
00:35:26.966 [2024-10-06 11:30:24.260637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.260649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.260756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.260770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.260948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.260961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.261147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.261166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.261287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.261300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.261473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.261485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.261618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.261631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.261816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.261829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.262009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.262021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.262136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.262149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 
00:35:26.966 [2024-10-06 11:30:24.262319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.262332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.262440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.262453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.262688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.262701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.262810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.262823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.263091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.263104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.263393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.263406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.263571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.263584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.263753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.263766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.264005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.264018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.264138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.264151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 
00:35:26.966 [2024-10-06 11:30:24.264267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.264280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.264449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.264462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.264665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.264678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.264883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.264897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.265083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.265098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.265215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.265229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.265415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.265429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.265639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.265653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.265834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.265847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.265963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.265976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 
00:35:26.966 [2024-10-06 11:30:24.266146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.266160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.266264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.266278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.266450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.266466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.266572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.266585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.266721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.266733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.966 [2024-10-06 11:30:24.266863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.966 [2024-10-06 11:30:24.266875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.966 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.267069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.267082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.267193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.267206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.267369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.267383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.267668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.267680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 
00:35:26.967 [2024-10-06 11:30:24.267850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.267863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.267978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.267991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.268176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.268190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.268366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.268380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.268555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.268568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.268683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.268696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.268898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.268911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.269002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.269015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.269162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.269176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.269346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.269359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 
00:35:26.967 [2024-10-06 11:30:24.269459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.269473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.269638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.269651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.269768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.269780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.269916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.269929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.270031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.270045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.270235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.270248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.270461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.270475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.270552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.270563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.270693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.270707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.270890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.270904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 
00:35:26.967 [2024-10-06 11:30:24.271009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.271022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.271231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.271245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.271425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.271438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.271624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.271638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.271816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.271829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.272067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.272080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.272255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.272267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.272386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.272400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.272513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.272526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.272657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.272670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 
00:35:26.967 [2024-10-06 11:30:24.272835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.272848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.273016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.967 [2024-10-06 11:30:24.273029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.967 qpair failed and we were unable to recover it. 00:35:26.967 [2024-10-06 11:30:24.273144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.273160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.273332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.273345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.273521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.273534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.273644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.273657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.273737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.273749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.273872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.273885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.274071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.274084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.274207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.274220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 
00:35:26.968 [2024-10-06 11:30:24.274387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.274400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.274581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.274595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.274712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.274725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.274901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.274917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.275091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.275103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.275352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.275363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.275548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.275560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.275723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.275734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.275987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.275999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.276182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.276194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 
00:35:26.968 [2024-10-06 11:30:24.276303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.276314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.276576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.276588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.276757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.276769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.276879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.276890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.277069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.277081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.277295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.277306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.277587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.277609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.277740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.277757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.277974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.277991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.278233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.278250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 
00:35:26.968 [2024-10-06 11:30:24.278439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.278455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.278671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.278687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.278889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.278905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.279102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.279118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.968 qpair failed and we were unable to recover it. 00:35:26.968 [2024-10-06 11:30:24.279295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.968 [2024-10-06 11:30:24.279311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.279489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.279505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.279711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.279726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.279943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.279958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.280099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.280115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.280372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.280388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 
00:35:26.969 [2024-10-06 11:30:24.280581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.280597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.280777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.280792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.281064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.281083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.281214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.281230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.281455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.281471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.281722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.281738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.281928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.281943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.282243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.282260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.282530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.282546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.282789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.282805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 
00:35:26.969 [2024-10-06 11:30:24.283002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.283018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.283236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.283252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.283498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.283514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.283626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.283641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.283781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.283798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.284054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.284077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.284288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.284305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.284547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.284563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.284806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.284822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.285091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.285107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 
00:35:26.969 [2024-10-06 11:30:24.285295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.285311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.285496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.285511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.285726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.285741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.285942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.285957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.286182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.286199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.286393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.286408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.286548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.286563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.286693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.286708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.286884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.286899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.287143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.287159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 
00:35:26.969 [2024-10-06 11:30:24.287405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.969 [2024-10-06 11:30:24.287421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.969 qpair failed and we were unable to recover it. 00:35:26.969 [2024-10-06 11:30:24.287605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.287620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.287830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.287845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.287958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.287973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.288115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.288130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.288413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.288429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.288641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.288656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.288913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.288929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.289116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.289131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.289327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.289343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 
00:35:26.970 [2024-10-06 11:30:24.289549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.289565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.289767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.289783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.289988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.290007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.290193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.290209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.290352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.290367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.290635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.290651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.290853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.290868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.291088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.291103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.291292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.291308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.291452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.291467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 
00:35:26.970 [2024-10-06 11:30:24.291691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.291707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.291842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.291857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.292074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.292091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.292312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.292328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.292479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.292495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.292720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.292735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.292928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.292944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.293191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.293207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.293333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.293348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.293587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.293603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 
00:35:26.970 [2024-10-06 11:30:24.293892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.293908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.294154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.294170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.294295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.294310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.294430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.294446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.294647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.294664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.294848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.294863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.295052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.295074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.295219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.970 [2024-10-06 11:30:24.295236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.970 qpair failed and we were unable to recover it. 00:35:26.970 [2024-10-06 11:30:24.295427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.295444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.295644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.295660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 
00:35:26.971 [2024-10-06 11:30:24.295800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.295816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.296064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.296080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.296269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.296285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.296497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.296513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.296728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.296744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.296933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.296949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.297220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.297236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.297358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.297374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.297525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.297541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.297807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.297823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 
00:35:26.971 [2024-10-06 11:30:24.298017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.298033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.298253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.298270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.298409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.298427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.298688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.298704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.298885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.298901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.299164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.299181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.299310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.299325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.299547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.299563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.299776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.299792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.299976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.299991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 
00:35:26.971 [2024-10-06 11:30:24.300169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.300186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.300380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.300396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.300657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.300673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.300806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.300823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.301070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.301087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.301284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.301299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.301516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.301533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.301767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.301783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.301998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.302014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.302269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.302286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 
00:35:26.971 [2024-10-06 11:30:24.302455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.302471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.302667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.302683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.302930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.302945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.303135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.303152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.303363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.971 [2024-10-06 11:30:24.303379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.971 qpair failed and we were unable to recover it. 00:35:26.971 [2024-10-06 11:30:24.303557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.303589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.303819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.303851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.304001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.304032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.304286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.304320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 
00:35:26.972 [2024-10-06 11:30:24.304368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6946a0 (9): Bad file descriptor 00:35:26.972 [2024-10-06 11:30:24.304630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.304665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.304979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.304989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.305233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.305244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.305436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.305446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.305642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.305652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.305954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.305986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.306251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.306285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.306516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.306548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.306793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.306826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 
00:35:26.972 [2024-10-06 11:30:24.307050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.307104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.307342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.307374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.307541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.307574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.307910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.307942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.308191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.308226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.308462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.308507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.308646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.308656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.308869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.308901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.309193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.309227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.309419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.309452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 
00:35:26.972 [2024-10-06 11:30:24.309703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.309735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.310006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.310038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.310214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.310247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.310461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.310493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.310817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.310851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.311161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.311195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.311366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.311398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.311685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.311726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.972 [2024-10-06 11:30:24.311912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.972 [2024-10-06 11:30:24.311922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.972 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.312111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.312144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 
00:35:26.973 [2024-10-06 11:30:24.312315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.312348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.312572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.312611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.312906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.312949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.313244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.313278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.313436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.313468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.313647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.313679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.313979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.314013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.314174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.314208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.314501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.314534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.314860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.314893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 
00:35:26.973 [2024-10-06 11:30:24.315158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.315191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.315429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.315461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.315732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.315743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.315897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.315907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.316107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.316118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.316307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.316319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.316508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.316518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.316630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.316640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.316844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.316854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.317064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.317074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 
00:35:26.973 [2024-10-06 11:30:24.317260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.317270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.317389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.317399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.317617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.317628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.317755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.317765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.318036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.318080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.318309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.318341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.318527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.318559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.318848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.318881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.319134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.319168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.319400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.319434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 
00:35:26.973 [2024-10-06 11:30:24.319640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.319650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.319919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.319952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.320229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.320262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.320495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.320528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.320765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.320798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.973 qpair failed and we were unable to recover it. 00:35:26.973 [2024-10-06 11:30:24.321104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.973 [2024-10-06 11:30:24.321116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.321304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.321314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.321405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.321418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.321516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.321527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.321766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.321798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 
00:35:26.974 [2024-10-06 11:30:24.321982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.322015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.322236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.322270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.322434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.322466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.322673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.322705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.322882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.322914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.323067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.323078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.323178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.323188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.323377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.323388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.323480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.323491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.323706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.323738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 
00:35:26.974 [2024-10-06 11:30:24.323905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.323937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.324129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.324163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.324440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.324474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.324708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.324740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.324909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.324920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.325043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.325054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.325170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.325181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.325302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.325313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.325424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.325434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.325615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.325626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 
00:35:26.974 [2024-10-06 11:30:24.325795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.325828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.326047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.326099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.326265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.326297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.326515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.326547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.326727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.326759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.326924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.326956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.327181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.327216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.327383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.327415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.327582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.327614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.327757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.327767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 
00:35:26.974 [2024-10-06 11:30:24.327910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.327920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.328022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.328032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.974 qpair failed and we were unable to recover it. 00:35:26.974 [2024-10-06 11:30:24.328224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.974 [2024-10-06 11:30:24.328256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.328427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.328460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.328739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.328771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.328928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.328938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.329055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.329069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.329263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.329276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.329444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.329455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.329560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.329585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 
00:35:26.975 [2024-10-06 11:30:24.329712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.329744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.329912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.329944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.330170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.330204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.330377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.330410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.330631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.330664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.330808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.330852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.330952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.330962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.331215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.331250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.331410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.331443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.331679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.331715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 
00:35:26.975 [2024-10-06 11:30:24.331830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.331840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.331959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.331969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.332137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.332148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.332332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.332343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.332513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.332545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.332766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.332799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.333028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.333083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.333242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.333273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.333436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.333469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.333680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.333714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 
00:35:26.975 [2024-10-06 11:30:24.333989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.334021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.334316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.334350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.334563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.334596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.334812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.334845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.335002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.335035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.335200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.335234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.335487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.335519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.335662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.335694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.335917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.975 [2024-10-06 11:30:24.335927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.975 qpair failed and we were unable to recover it. 00:35:26.975 [2024-10-06 11:30:24.336120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.336154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 
00:35:26.976 [2024-10-06 11:30:24.336373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.336406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.336622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.336653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.336936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.336969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.337149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.337182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.337342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.337374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.337670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.337702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.337914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.337959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.338072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.338084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.338241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.338251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.338360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.338370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 
00:35:26.976 [2024-10-06 11:30:24.338474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.338484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.338590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.338600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.338717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.338727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.338908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.338918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.339011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.339022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.339096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.339106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.339217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.339227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.339429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.339438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.339561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.339571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.339692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.339703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 
00:35:26.976 [2024-10-06 11:30:24.339809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.339819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.340006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.340016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.340180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.340190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.340354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.340364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.340578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.340588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.340704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.340715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.340880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.340890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.340997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.341006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.341088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.341098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.341189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.341199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 
00:35:26.976 [2024-10-06 11:30:24.341400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.341410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.341589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.341599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.341693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.341702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.341960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.341971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.342172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.342183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.342312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.976 [2024-10-06 11:30:24.342322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.976 qpair failed and we were unable to recover it. 00:35:26.976 [2024-10-06 11:30:24.342512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.342522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.342642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.342653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.342763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.342773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.343016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.343026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 
00:35:26.977 [2024-10-06 11:30:24.343164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.343175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.343281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.343291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.343458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.343468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.343640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.343651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.343772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.343782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.343878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.343888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.344051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.344065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.344179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.344191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.344300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.344311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.344447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.344457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 
00:35:26.977 [2024-10-06 11:30:24.344612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.344622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.344747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.344757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.344854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.344864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.345040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.345051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.345168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.345178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.345291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.345301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.345400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.345410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.345546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.345556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.345657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.345667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.345845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.345855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 
00:35:26.977 [2024-10-06 11:30:24.346020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.346030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.346144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.346155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.346261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.346272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.346364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.346374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.346488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.346499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.346677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.346687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.346863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.346873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.346992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.347002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.977 [2024-10-06 11:30:24.347198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.977 [2024-10-06 11:30:24.347208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.977 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.347334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.347344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 
00:35:26.978 [2024-10-06 11:30:24.347459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.347470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.347602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.347612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.347707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.347717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.347953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.347964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.348077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.348088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.348198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.348208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.348328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.348339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.348450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.348460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.348702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.348712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.348827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.348837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 
00:35:26.978 [2024-10-06 11:30:24.348958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.348968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.349156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.349166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.349276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.349286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.349396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.349406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.349534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.349545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.349639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.349649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.349748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.349757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.349878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.349890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.350057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.350074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.350175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.350185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 
00:35:26.978 [2024-10-06 11:30:24.350293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.350304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.350469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.350479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.350579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.350590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.350689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.350699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.350813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.350823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.350935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.350945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.351128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.351139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.351248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.351258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.351421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.351431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.351556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.351566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 
00:35:26.978 [2024-10-06 11:30:24.351803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.351813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.351987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.351998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.352102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.352113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.352219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.352229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.352404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.978 [2024-10-06 11:30:24.352414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.978 qpair failed and we were unable to recover it. 00:35:26.978 [2024-10-06 11:30:24.352520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.352531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.352640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.352649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.352752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.352762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.352963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.352974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.353093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.353103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 
00:35:26.979 [2024-10-06 11:30:24.353215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.353226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.353324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.353334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.353506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.353517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.353614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.353624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.353805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.353815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.353916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.353926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.354139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.354150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.354325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.354336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.354460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.354470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.354559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.354569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 
00:35:26.979 [2024-10-06 11:30:24.354666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.354676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.354826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.354836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.354949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.354959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.355142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.355153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.355268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.355278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.355513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.355523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.355632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.355642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.355773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.355786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.355957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.355968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.356115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.356125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 
00:35:26.979 [2024-10-06 11:30:24.356201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.356211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.356309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.356319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.356412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.356422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.356533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.356543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.356653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.356663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.356829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.356839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.357051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.357066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.357177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.357188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.357357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.357368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.357466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.357476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 
00:35:26.979 [2024-10-06 11:30:24.357574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.357584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.357748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.979 [2024-10-06 11:30:24.357759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.979 qpair failed and we were unable to recover it. 00:35:26.979 [2024-10-06 11:30:24.357864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.980 [2024-10-06 11:30:24.357874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.980 qpair failed and we were unable to recover it. 00:35:26.980 [2024-10-06 11:30:24.357983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.980 [2024-10-06 11:30:24.357993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.980 qpair failed and we were unable to recover it. 00:35:26.980 [2024-10-06 11:30:24.358234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.980 [2024-10-06 11:30:24.358245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.980 qpair failed and we were unable to recover it. 00:35:26.980 [2024-10-06 11:30:24.358377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.980 [2024-10-06 11:30:24.358388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.980 qpair failed and we were unable to recover it. 00:35:26.980 [2024-10-06 11:30:24.358612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.980 [2024-10-06 11:30:24.358622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.980 qpair failed and we were unable to recover it. 00:35:26.980 [2024-10-06 11:30:24.358871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.980 [2024-10-06 11:30:24.358881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.980 qpair failed and we were unable to recover it. 00:35:26.980 [2024-10-06 11:30:24.359072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.980 [2024-10-06 11:30:24.359083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.980 qpair failed and we were unable to recover it. 00:35:26.980 [2024-10-06 11:30:24.359315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.980 [2024-10-06 11:30:24.359325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.980 qpair failed and we were unable to recover it. 
00:35:26.980 [2024-10-06 11:30:24.359460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.980 [2024-10-06 11:30:24.359470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.980 qpair failed and we were unable to recover it. 00:35:26.980 [2024-10-06 11:30:24.359572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.980 [2024-10-06 11:30:24.359582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.980 qpair failed and we were unable to recover it. 00:35:26.980 [2024-10-06 11:30:24.359789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.980 [2024-10-06 11:30:24.359800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.980 qpair failed and we were unable to recover it. 00:35:26.980 [2024-10-06 11:30:24.359984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.980 [2024-10-06 11:30:24.359994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.980 qpair failed and we were unable to recover it. 00:35:26.980 [2024-10-06 11:30:24.360185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.980 [2024-10-06 11:30:24.360196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.980 qpair failed and we were unable to recover it. 00:35:26.980 [2024-10-06 11:30:24.360327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.980 [2024-10-06 11:30:24.360337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.980 qpair failed and we were unable to recover it. 00:35:26.980 [2024-10-06 11:30:24.360546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.980 [2024-10-06 11:30:24.360556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.980 qpair failed and we were unable to recover it. 00:35:26.980 [2024-10-06 11:30:24.360673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.980 [2024-10-06 11:30:24.360684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.980 qpair failed and we were unable to recover it. 00:35:26.980 [2024-10-06 11:30:24.360800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.980 [2024-10-06 11:30:24.360809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.980 qpair failed and we were unable to recover it. 00:35:26.980 [2024-10-06 11:30:24.360931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.980 [2024-10-06 11:30:24.360941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.980 qpair failed and we were unable to recover it. 
00:35:26.986 [2024-10-06 11:30:24.399876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.399886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.400083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.400093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.400271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.400281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.400514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.400524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.400688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.400698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.400825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.400836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.401068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.401078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.401333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.401343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.401483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.401494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.401737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.401747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 
00:35:26.986 [2024-10-06 11:30:24.401929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.401939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.402190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.402200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.402336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.402346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.402507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.402518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.402629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.402639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.402887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.402898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.403006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.403017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.403231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.403242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.403430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.403440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.403688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.403699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 
00:35:26.986 [2024-10-06 11:30:24.403916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.403926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.404094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.404106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.404225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.404235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.404368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.404378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.404552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.404562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.404668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.986 [2024-10-06 11:30:24.404679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.986 qpair failed and we were unable to recover it. 00:35:26.986 [2024-10-06 11:30:24.404979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.404989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.405249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.405259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.405459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.405468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.405693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.405703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 
00:35:26.987 [2024-10-06 11:30:24.405903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.405913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.406166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.406176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.406378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.406389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.406561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.406572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.406759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.406772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.407036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.407046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.407190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.407202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.407386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.407397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.407583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.407593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.407717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.407728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 
00:35:26.987 [2024-10-06 11:30:24.407889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.407899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.408115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.408126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.408333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.408344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.408577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.408587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.408787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.408797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.408999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.409009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.409212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.409222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.409341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.409352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.409586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.409596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.409787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.409798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 
00:35:26.987 [2024-10-06 11:30:24.410032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.410042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.410273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.410284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.410422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.410431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.410568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.410578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.410865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.410875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.411065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.411076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.411327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.411337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.411462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.411472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.987 [2024-10-06 11:30:24.411593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.987 [2024-10-06 11:30:24.411604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.987 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.411789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.411800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 
00:35:26.988 [2024-10-06 11:30:24.411929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.411940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.412136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.412148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.412349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.412360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.412544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.412555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.412774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.412785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.412964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.412975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.413189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.413199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.413331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.413342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.413516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.413527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.413736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.413746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 
00:35:26.988 [2024-10-06 11:30:24.413950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.413960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.414139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.414150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.414290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.414300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.414465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.414475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.414679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.414692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.414925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.414935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.415164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.415175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.415430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.415440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.415639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.415649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.415907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.415917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 
00:35:26.988 [2024-10-06 11:30:24.416214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.416225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.416409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.416419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.416584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.416594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.416795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.416805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.416976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.416986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.417168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.417179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.417363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.417374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.417560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.417570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.417703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.417713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.417840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.417851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 
00:35:26.988 [2024-10-06 11:30:24.418064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.418074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.418200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.418209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.418339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.418350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.418478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.418488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.418677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.418687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.988 qpair failed and we were unable to recover it. 00:35:26.988 [2024-10-06 11:30:24.418855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.988 [2024-10-06 11:30:24.418865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.419005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.419016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.419141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.419152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.419359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.419370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.419496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.419506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 
00:35:26.989 [2024-10-06 11:30:24.419807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.419818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.420099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.420111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.420297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.420308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.420422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.420432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.420610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.420620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.420816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.420826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.421105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.421117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.421391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.421402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.421637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.421647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.421810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.421820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 
00:35:26.989 [2024-10-06 11:30:24.421988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.421998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.422172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.422183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.422356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.422366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.422538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.422549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.422664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.422676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.422896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.422906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.423189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.423199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.423434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.423444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.423553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.423563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.423834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.423844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 
00:35:26.989 [2024-10-06 11:30:24.424033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.424044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.424230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.424241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.424417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.424427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.424610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.424620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.424811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.424821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.425010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.425020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.425185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.425196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.425303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.425312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.425439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.425449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.425563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.425574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 
00:35:26.989 [2024-10-06 11:30:24.425755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.425766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.426008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.989 [2024-10-06 11:30:24.426018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.989 qpair failed and we were unable to recover it. 00:35:26.989 [2024-10-06 11:30:24.426177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.990 [2024-10-06 11:30:24.426188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.990 qpair failed and we were unable to recover it. 00:35:26.990 [2024-10-06 11:30:24.426309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.990 [2024-10-06 11:30:24.426319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.990 qpair failed and we were unable to recover it. 00:35:26.990 [2024-10-06 11:30:24.426573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.990 [2024-10-06 11:30:24.426583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.990 qpair failed and we were unable to recover it. 00:35:26.990 [2024-10-06 11:30:24.426893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.990 [2024-10-06 11:30:24.426902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.990 qpair failed and we were unable to recover it. 00:35:26.990 [2024-10-06 11:30:24.427161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.990 [2024-10-06 11:30:24.427171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.990 qpair failed and we were unable to recover it. 00:35:26.990 [2024-10-06 11:30:24.427403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.990 [2024-10-06 11:30:24.427414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.990 qpair failed and we were unable to recover it. 00:35:26.990 [2024-10-06 11:30:24.427668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.990 [2024-10-06 11:30:24.427678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.990 qpair failed and we were unable to recover it. 00:35:26.990 [2024-10-06 11:30:24.427948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.990 [2024-10-06 11:30:24.427958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.990 qpair failed and we were unable to recover it. 
00:35:26.990 [2024-10-06 11:30:24.428239] .. 00:35:26.995 [2024-10-06 11:30:24.476448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (the identical connect-retry failure triplet repeats for every attempt in this interval, differing only in timestamps)
00:35:26.995 [2024-10-06 11:30:24.476683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.476713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.477012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.477045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.477401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.477434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.477665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.477696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.477930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.477961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.478193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.478227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.478538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.478548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.478679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.478689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.478943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.478953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.479078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.479088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 
00:35:26.995 [2024-10-06 11:30:24.479267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.479277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.479509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.479541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.479790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.479821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.480049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.480066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.480242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.480254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.480520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.480552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.480839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.480871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.481133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.481143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.481402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.481434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.481669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.481702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 
00:35:26.995 [2024-10-06 11:30:24.481952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.481993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.995 qpair failed and we were unable to recover it. 00:35:26.995 [2024-10-06 11:30:24.482250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.995 [2024-10-06 11:30:24.482260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.482446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.482456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.482653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.482684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.482903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.482934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.483246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.483280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.483574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.483605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.483915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.483947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.484239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.484273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.484495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.484526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 
00:35:26.996 [2024-10-06 11:30:24.484833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.484877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.485119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.485130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.485325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.485335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.485588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.485597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.485706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.485717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.485915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.485924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.486200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.486233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.486528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.486560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.486801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.486832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.487007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.487037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 
00:35:26.996 [2024-10-06 11:30:24.487362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.487396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.487546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.487578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.487816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.487848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.488147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.488158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.488364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.488374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.488522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.488532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.488864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.996 [2024-10-06 11:30:24.488897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.996 qpair failed and we were unable to recover it. 00:35:26.996 [2024-10-06 11:30:24.489200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.489240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.489418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.489428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.489570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.489601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 
00:35:26.997 [2024-10-06 11:30:24.489962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.489994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.490276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.490286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.490547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.490557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.490848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.490880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.491182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.491226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.491501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.491511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.491762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.491772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.491968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.491978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.492189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.492209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.492330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.492340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 
00:35:26.997 [2024-10-06 11:30:24.492467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.492478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.492713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.492724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.492938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.492948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.493181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.493192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.493312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.493322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.493444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.493455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.493621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.493631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.493930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.493963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.494215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.494248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.494584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.494594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 
00:35:26.997 [2024-10-06 11:30:24.494889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.494922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.495170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.495203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.495377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.495388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.495565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.495597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.495896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.495927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.496095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.496128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.496367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.496377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.496594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.496604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.496781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.496791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 00:35:26.997 [2024-10-06 11:30:24.497034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.997 [2024-10-06 11:30:24.497077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:26.997 qpair failed and we were unable to recover it. 
00:35:27.287 [2024-10-06 11:30:24.498388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.498411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.498633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.498644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.498822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.498832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.499106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.499117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.499325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.499334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.499527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.499537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.499729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.499759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.499988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.500018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.500190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.500223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.500461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.500472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 
00:35:27.287 [2024-10-06 11:30:24.500749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.500760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.500940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.500951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.501217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.501250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.501487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.501518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.501779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.501817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.502079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.502112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.502290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.502321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.502501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.502512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.502646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.502657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.502892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.502903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 
00:35:27.287 [2024-10-06 11:30:24.503073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.503084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.503277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.503307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.503533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.503564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.503869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.503901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.504150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.504183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.504465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.504486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.504598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.504608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.504883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.504914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.505242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.505276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.505544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.505554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 
00:35:27.287 [2024-10-06 11:30:24.505726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.505736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.506019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.506049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.287 [2024-10-06 11:30:24.506303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.287 [2024-10-06 11:30:24.506335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.287 qpair failed and we were unable to recover it. 00:35:27.288 [2024-10-06 11:30:24.506557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.288 [2024-10-06 11:30:24.506567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.288 qpair failed and we were unable to recover it. 00:35:27.288 [2024-10-06 11:30:24.506699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.288 [2024-10-06 11:30:24.506730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.288 qpair failed and we were unable to recover it. 00:35:27.288 [2024-10-06 11:30:24.506946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.288 [2024-10-06 11:30:24.506978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.288 qpair failed and we were unable to recover it. 00:35:27.288 [2024-10-06 11:30:24.507283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.288 [2024-10-06 11:30:24.507317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.288 qpair failed and we were unable to recover it. 00:35:27.288 [2024-10-06 11:30:24.507492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.288 [2024-10-06 11:30:24.507523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.288 qpair failed and we were unable to recover it. 00:35:27.288 [2024-10-06 11:30:24.507768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.288 [2024-10-06 11:30:24.507799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.288 qpair failed and we were unable to recover it. 00:35:27.288 [2024-10-06 11:30:24.508048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.288 [2024-10-06 11:30:24.508106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.288 qpair failed and we were unable to recover it. 
00:35:27.288 [2024-10-06 11:30:24.508271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.288 [2024-10-06 11:30:24.508303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.288 qpair failed and we were unable to recover it. 00:35:27.288 [2024-10-06 11:30:24.508563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.288 [2024-10-06 11:30:24.508573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.288 qpair failed and we were unable to recover it. 00:35:27.288 [2024-10-06 11:30:24.508901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.288 [2024-10-06 11:30:24.508933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.288 qpair failed and we were unable to recover it. 00:35:27.288 [2024-10-06 11:30:24.509247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.288 [2024-10-06 11:30:24.509280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.288 qpair failed and we were unable to recover it. 00:35:27.288 [2024-10-06 11:30:24.509558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.288 [2024-10-06 11:30:24.509568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.288 qpair failed and we were unable to recover it. 00:35:27.288 [2024-10-06 11:30:24.509826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.288 [2024-10-06 11:30:24.509836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.288 qpair failed and we were unable to recover it. 00:35:27.288 [2024-10-06 11:30:24.510073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.288 [2024-10-06 11:30:24.510083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.288 qpair failed and we were unable to recover it. 00:35:27.288 [2024-10-06 11:30:24.510254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.288 [2024-10-06 11:30:24.510264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.288 qpair failed and we were unable to recover it. 00:35:27.288 [2024-10-06 11:30:24.510407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.288 [2024-10-06 11:30:24.510437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.288 qpair failed and we were unable to recover it. 00:35:27.288 [2024-10-06 11:30:24.510604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.288 [2024-10-06 11:30:24.510635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.288 qpair failed and we were unable to recover it. 
00:35:27.288 [2024-10-06 11:30:24.510865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:27.288 [2024-10-06 11:30:24.510897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:27.288 qpair failed and we were unable to recover it.
00:35:27.288 [... the same three-line error sequence repeats continuously from 11:30:24.510 through 11:30:24.564: connect() to 10.0.0.2 port 4420 keeps failing with errno = 111 and tqpair=0x7f81cc000b90 cannot be recovered ...]
00:35:27.291 [2024-10-06 11:30:24.564543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:27.291 [2024-10-06 11:30:24.564575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420
00:35:27.291 qpair failed and we were unable to recover it.
00:35:27.291 [2024-10-06 11:30:24.564831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.564863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.565079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.565111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.565275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.565309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.565617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.565649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.565924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.565957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.566186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.566196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.566317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.566327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.566570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.566602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.566903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.566935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.567159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.567170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 
00:35:27.291 [2024-10-06 11:30:24.567381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.567413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.567716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.567747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.568045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.568087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.568295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.568305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.568493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.568526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.568718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.568751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.569081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.569115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.569366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.569397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.569570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.569581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.569755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.569765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 
00:35:27.291 [2024-10-06 11:30:24.570068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.570078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.570214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.570226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.570410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.570420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.570606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.570639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.570904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.570937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.571273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.571306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.571521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.571531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.571702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.571712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.571979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.572011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.572322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.572332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 
00:35:27.291 [2024-10-06 11:30:24.572594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.572605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.572827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.572838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.573031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.573042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.573311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.573321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.573533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.573544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.573842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.573871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.574134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.574168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.574442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.291 [2024-10-06 11:30:24.574452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.291 qpair failed and we were unable to recover it. 00:35:27.291 [2024-10-06 11:30:24.574677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.574688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.574873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.574884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 
00:35:27.292 [2024-10-06 11:30:24.575192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.575225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.575465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.575497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.575750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.575761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.575901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.575912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.576171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.576204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.576452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.576462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.576694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.576726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.577032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.577073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.577357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.577367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.577605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.577614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 
00:35:27.292 [2024-10-06 11:30:24.577837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.577848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.578110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.578121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.578258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.578268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.578422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.578453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.578647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.578679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.578888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.578919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.579227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.579261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.579437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.579469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.579784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.579795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.580098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.580108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 
00:35:27.292 [2024-10-06 11:30:24.580245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.580255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.580399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.580411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.580631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.580663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.580858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.580890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.581172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.581204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.581430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.581461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.581655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.581687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.582014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.582045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.582270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.582303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.582472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.582504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 
00:35:27.292 [2024-10-06 11:30:24.582671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.582681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.582951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.582983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.583234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.583267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.583453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.583484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.583759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.583789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.584042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.584087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.584220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.584230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.584409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.584440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.584667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.584699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.584916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.584947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 
00:35:27.292 [2024-10-06 11:30:24.585238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.585271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.585441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.585452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.585584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.585594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.585786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.585797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.586044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.586088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.586344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.586375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.586620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.586650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.586936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.586968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.587200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.587234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.587416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.587447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 
00:35:27.292 [2024-10-06 11:30:24.587735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.587767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.588014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.588046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.588319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.588351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.588636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.588669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.588845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.588877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.589222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.589255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.589536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.589546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.292 qpair failed and we were unable to recover it. 00:35:27.292 [2024-10-06 11:30:24.589690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.292 [2024-10-06 11:30:24.589701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.589907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.589918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.590183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.590194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 
00:35:27.293 [2024-10-06 11:30:24.590316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.590326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.590520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.590533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.590673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.590683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.590920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.590930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.591127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.591139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.591329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.591339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.591587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.591597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.591782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.591792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.591926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.591936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.592147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.592157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 
00:35:27.293 [2024-10-06 11:30:24.592297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.592307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.592584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.592595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.592726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.592736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.592978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.592987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.593188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.593199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.593323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.593334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.593505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.593516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.593631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.593642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.593759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.593770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.593981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.593992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 
00:35:27.293 [2024-10-06 11:30:24.594171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.594182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.594397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.594407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.594545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.594555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.594800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.594811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.594980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.594990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.595195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.595206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.595328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.595339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.595530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.595540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.595755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.595766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 00:35:27.293 [2024-10-06 11:30:24.595897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.595908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it. 
00:35:27.293 [2024-10-06 11:30:24.596094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.293 [2024-10-06 11:30:24.596105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.293 qpair failed and we were unable to recover it.
[The same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously in the log from 11:30:24.596 through 11:30:24.638; the duplicate entries are omitted here.]
00:35:27.296 [2024-10-06 11:30:24.638345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.638355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.638536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.638547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.638777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.638787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.639039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.639049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.639271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.639309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.639501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.639519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.639786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.639802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.640055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.640078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.640217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.640233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.640483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.640498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 
00:35:27.296 [2024-10-06 11:30:24.640673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.640689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.640950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.640965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.641278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.641294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.641513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.641529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.641823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.641838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.641989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.642027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.642311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.642323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.642612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.642622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.642879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.642889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.643068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.643079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 
00:35:27.296 [2024-10-06 11:30:24.643260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.643270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.643527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.643537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.643718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.643729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.643912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.643923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.644094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.644104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.644268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.644278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.644547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.644558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.644805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.644815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.644991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.645004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.645207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.645218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 
00:35:27.296 [2024-10-06 11:30:24.645406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.645416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.645677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.645687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.645816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.645826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.646097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.646108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.646226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.646237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.646472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.646483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.646742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.646752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.647018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.647028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.647223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.647233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.647430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.647441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 
00:35:27.296 [2024-10-06 11:30:24.647625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.647635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.647838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.647849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.648065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.648076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.648258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.648268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.648398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.648408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.648575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.648586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.648745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.648756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.649043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.649054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.649292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.649302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.649431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.649441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 
00:35:27.296 [2024-10-06 11:30:24.649707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.649717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.649960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.649970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.650215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.650226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.650404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.650415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.650643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.650653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.650820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.650830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.651098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.651109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.651353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.651363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.651544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.651554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.651757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.651767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 
00:35:27.296 [2024-10-06 11:30:24.651997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.652008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.652169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.652179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.296 [2024-10-06 11:30:24.652360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.296 [2024-10-06 11:30:24.652371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.296 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.652560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.652570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.652753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.652763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.652955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.652966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.653229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.653240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.653473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.653483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.653685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.653697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.653947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.653957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 
00:35:27.297 [2024-10-06 11:30:24.654222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.654233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.654436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.654447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.654626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.654636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.654823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.654834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.655106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.655117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.655383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.655393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.655625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.655635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.655819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.655830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.656023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.656033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.656287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.656298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 
00:35:27.297 [2024-10-06 11:30:24.656557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.656568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.656770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.656780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.656989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.656999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.657281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.657292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.657491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.657501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.657688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.657697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.657879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.657889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.658067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.658077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.658262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.658273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.658471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.658481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 
00:35:27.297 [2024-10-06 11:30:24.658603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.658613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.658792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.658803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.658977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.658987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.659169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.659180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.659385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.659395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.659635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.659645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.659911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.659921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.660048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.660066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.660339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.660350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.660555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.660565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 
00:35:27.297 [2024-10-06 11:30:24.660731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.660742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.660946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.660956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.661069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.661079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.661336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.661347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.661513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.661523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.661753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.661763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.661996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.662006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.662283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.662293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.662473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.662486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.662614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.662624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 
00:35:27.297 [2024-10-06 11:30:24.662879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.662890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.663133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.663144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.663276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.663286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.663485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.663496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.663775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.663786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.664075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.664086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.664251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.664261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.664490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.664500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.664699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.664709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.664894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.664904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 
00:35:27.297 [2024-10-06 11:30:24.665077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.665087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.665349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.665359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.665616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.665627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.665880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.665890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.666055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.666069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.666304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.666315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.666556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.666567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.666845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.666855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.667099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.667109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.667239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.667249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 
00:35:27.297 [2024-10-06 11:30:24.667479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.667490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.667722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.667733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.667993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.668003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.668235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.668245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.668522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.668533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.668793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.668803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.669032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.669042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.669306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.669317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.669431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.669441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 00:35:27.297 [2024-10-06 11:30:24.669700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.297 [2024-10-06 11:30:24.669710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.297 qpair failed and we were unable to recover it. 
00:35:27.300 [2024-10-06 11:30:24.713285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.713295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.713550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.713560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.713741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.713751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.713915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.713925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.714106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.714116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.714397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.714410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.714642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.714652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.714816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.714826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.714990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.715001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.715231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.715242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 
00:35:27.300 [2024-10-06 11:30:24.715492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.715502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.715764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.715774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.715951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.715962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.716088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.716099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.716378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.716389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.716665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.716675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.716880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.716890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.717119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.717130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.717297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.717308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.717544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.717554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 
00:35:27.300 [2024-10-06 11:30:24.717816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.717826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.718009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.718019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.718273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.718284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.718460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.718471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.718612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.718621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.718824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.718834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.719005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.719015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.719203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.719214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.719409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.719419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.719663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.719673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 
00:35:27.300 [2024-10-06 11:30:24.719923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.719933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.720130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.720141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.720264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.720274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.720509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.720519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.720697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.720708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.720964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.720974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.721141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.721153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.721354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.721364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.721530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.721540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.721711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.721721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 
00:35:27.300 [2024-10-06 11:30:24.721973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.721984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.722248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.722259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.722557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.722567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.722837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.722847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.723108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.723119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.723375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.723387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.723499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.723509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.723674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.723685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.723945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.723955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.724208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.724219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 
00:35:27.300 [2024-10-06 11:30:24.724459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.724469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.724726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.724736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.724966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.724976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.725102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.725113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.725321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.725331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.725534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.725544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.725708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.725719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.725882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.300 [2024-10-06 11:30:24.725893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.300 qpair failed and we were unable to recover it. 00:35:27.300 [2024-10-06 11:30:24.726148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.726158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.726337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.726348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 
00:35:27.301 [2024-10-06 11:30:24.726601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.726612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.726724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.726734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.726917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.726928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.727091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.727101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.727279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.727289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.727469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.727479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.727642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.727653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.727769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.727779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.728041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.728052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.728351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.728362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 
00:35:27.301 [2024-10-06 11:30:24.728614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.728624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.728904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.728915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.729194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.729205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.729453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.729463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.729696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.729706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.729958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.729969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.730169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.730180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.730414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.730424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.730619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.730629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.730809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.730819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 
00:35:27.301 [2024-10-06 11:30:24.730996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.731006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.731211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.731222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.731454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.731464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.731650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.731660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.731771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.731781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.732013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.732025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.732203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.732214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.732326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.732336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.732575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.732585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.732762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.732772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 
00:35:27.301 [2024-10-06 11:30:24.733004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.733013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.733211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.733222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.733480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.733490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.733609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.733619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.733901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.733911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.734138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.734148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.734413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.734423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.734660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.734670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.734787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.734798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.734974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.734985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 
00:35:27.301 [2024-10-06 11:30:24.735186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.735197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.735449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.735459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.735666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.735676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.735771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.735781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.735945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.735956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.736132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.736142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.736317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.736327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.736584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.736594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.736852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.736862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.737039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.737049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 
00:35:27.301 [2024-10-06 11:30:24.737244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.737255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.737509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.737519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.737777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.737787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.737950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.737961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.738216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.738227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.738350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.738360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.738537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.738548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.738803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.738813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.739071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.739082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.739260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.739270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 
00:35:27.301 [2024-10-06 11:30:24.739400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.739410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.739574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.739584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.739842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.739852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.740098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.740109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.740231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.740241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.740413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.740426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.740631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.740641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.740807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.740817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.740998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.741009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.741239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.741250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 
00:35:27.301 [2024-10-06 11:30:24.741529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.741539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.741780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.741791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.742023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.742033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.742275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.742286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.742451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.742462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.742719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.742729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.742985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-10-06 11:30:24.742995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.301 qpair failed and we were unable to recover it. 00:35:27.301 [2024-10-06 11:30:24.743238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.743249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.743502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.743513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.743693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.743704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 
00:35:27.302 [2024-10-06 11:30:24.743937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.743947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.744115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.744125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.744291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.744301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.744502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.744512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.744766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.744776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.745026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.745037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.745280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.745291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.745492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.745502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.745759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.745769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.746050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.746064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 
00:35:27.302 [2024-10-06 11:30:24.746187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.746198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.746450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.746461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.746708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.746719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.746972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.746982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.747216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.747226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.747460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.747470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.747746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.747756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.748002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.748012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.748206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.748217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.748381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.748392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 
00:35:27.302 [2024-10-06 11:30:24.748669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.748679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.748921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.748931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.749154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.749165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.749326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.749336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.749583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.749593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.749762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.749774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.750026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.750037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.750206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.750217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.750381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.750391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.750583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.750593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 
00:35:27.302 [2024-10-06 11:30:24.750740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.750750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.751004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.751014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.751245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.751256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.751433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.751443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.751609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.751619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.751740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.751750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.752007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.752017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.752273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.752284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.752522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.752532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.752700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.752710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 
00:35:27.302 [2024-10-06 11:30:24.752897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.752907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.753202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.753213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.753389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.753399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.753585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.753595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.753856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.753865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.754045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.754056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.754317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.754328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.754581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.754590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.754831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.754842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.755072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.755083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 
00:35:27.302 [2024-10-06 11:30:24.755376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.755387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.755574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.755585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.755782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.755792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.755974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.755984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.756146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.756157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.756336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.756347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.756626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.756636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.756928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.756938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.757049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.757069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.757322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.757332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 
00:35:27.302 [2024-10-06 11:30:24.757538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.757548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.757789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.757798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.757983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.757993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.758188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.758198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.758451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.758461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.758643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.758655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.758847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.758857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.759114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.759125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.759318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.759327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.759560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.759571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 
00:35:27.302 [2024-10-06 11:30:24.759703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.759713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.759944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.759954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.760142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.760152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.760329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.760339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.760545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.760555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.760744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.760754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.761005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.761015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.761217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.761228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.761432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.761442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.761734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.761745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 
00:35:27.302 [2024-10-06 11:30:24.761975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-10-06 11:30:24.761985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.302 qpair failed and we were unable to recover it. 00:35:27.302 [2024-10-06 11:30:24.762224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.762234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.762466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.762476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.762679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.762689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.762948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.762958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.763140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.763151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.763281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.763292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.763491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.763502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.763753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.763763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.763878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.763888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 
00:35:27.303 [2024-10-06 11:30:24.764148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.764159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.764420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.764451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.764700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.764749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.765069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.765086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.765363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.765395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.765651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.765682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.766010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.766042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.766314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.766345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.766645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.766676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.766855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.766886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 
00:35:27.303 [2024-10-06 11:30:24.767141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.767175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.767500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.767531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.767843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.767874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.768125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.768162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.768471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.768502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.768800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.768843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.769069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.769085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.769378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.769413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.769721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.769753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.769965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.769974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 
00:35:27.303 [2024-10-06 11:30:24.770166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.770198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.770480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.770512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.770818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.770850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.771071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.771103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.771313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.771323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.771529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.771560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.771800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.771831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.772127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.772160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.772461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.772492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.772819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.772851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 
00:35:27.303 [2024-10-06 11:30:24.773147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.773157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.773357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.773367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.773655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.773686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.774011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.774043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.774354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.774387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.774677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.774708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.774959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.774991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.775326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.775358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.775615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.775647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.775926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.775957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 
00:35:27.303 [2024-10-06 11:30:24.776297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.776329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.776630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.776662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.776957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.776988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.777170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.777180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.777436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.777467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.777757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.777789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.777997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.778007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.778267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.778277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.778543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.778576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.778794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.778825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 
00:35:27.303 [2024-10-06 11:30:24.779114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.779124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.779304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.779314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.779573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.779604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.779838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.779869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.780226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.780259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.780536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.780577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.780776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.780786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.780990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.780999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.781285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.781295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.781559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.781591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 
00:35:27.303 [2024-10-06 11:30:24.781802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.781834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.782115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.782147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.782466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.782498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.782814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.782846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.783124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.783162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.783465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.783497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.783731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.783763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.784036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.784080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.784307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.784338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.784571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.784603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 
00:35:27.303 [2024-10-06 11:30:24.784848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-10-06 11:30:24.784880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.303 qpair failed and we were unable to recover it. 00:35:27.303 [2024-10-06 11:30:24.785096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.785106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.785359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.785391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.785614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.785645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.785968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.785998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.786226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.786260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.786569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.786601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.786832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.786863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.787162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.787195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.787423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.787454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 
00:35:27.304 [2024-10-06 11:30:24.787601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.787632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.787848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.787879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.788121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.788132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.788366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.788375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.788576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.788586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.788781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.788791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.788967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.788976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.789240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.789273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.789508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.789540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.789844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.789875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 
00:35:27.304 [2024-10-06 11:30:24.790172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.790205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.790508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.790540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.790865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.790896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.791205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.791238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.791524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.791556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.791859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.791889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.792188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.792221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.792522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.792554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.792868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.792900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.793126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.793136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 
00:35:27.304 [2024-10-06 11:30:24.793249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.793259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.793528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.793560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.793799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.793831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.794110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.794120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.794367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.794395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.794705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.794737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.795037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.795077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.795313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.795345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.795688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.795720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.796003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.796036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 
00:35:27.304 [2024-10-06 11:30:24.796264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.796274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.796508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.796519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.796696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.796706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.796951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.796981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.797174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.797208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.797486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.797517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.797797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.797828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.798076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.798109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.798458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.798490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.798793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.798825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 
00:35:27.304 [2024-10-06 11:30:24.799037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.799077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.799372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.799405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.799702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.799739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.799974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.800006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.800350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.800383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.800651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.800682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.801006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.801038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.801227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.801259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.801538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.801569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.801747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.801780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 
00:35:27.304 [2024-10-06 11:30:24.802104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.802137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.802419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.802450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.802735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.802767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.803078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.803111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.803324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.803356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.803646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.803678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.803905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.803938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.804169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.804222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.804447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.804479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.804708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.804740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 
00:35:27.304 [2024-10-06 11:30:24.805041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.805051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.805312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.805349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.805691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.805722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.806025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.806056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.806304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.806335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.806649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.806681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.807008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.807039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.807299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.807331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.807564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.807595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.807815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.807847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 
00:35:27.304 [2024-10-06 11:30:24.808153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.808186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.808352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.808383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.808661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.304 [2024-10-06 11:30:24.808692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.304 qpair failed and we were unable to recover it. 00:35:27.304 [2024-10-06 11:30:24.808995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.809027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.809347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.809357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.809637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.809647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.809811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.809821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.809994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.810004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.810151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.810183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.810399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.810431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 
00:35:27.305 [2024-10-06 11:30:24.810663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.810696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.810894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.810904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.811022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.811068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.811301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.811332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.811618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.811649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.811953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.811984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.812224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.812257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.812416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.812447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.812759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.812790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.813082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.813114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 
00:35:27.305 [2024-10-06 11:30:24.813360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.813392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.813683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.813714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.814015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.814025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.814195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.814206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.814419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.814453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.814735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.814766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.815021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.815053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.815225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.815236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.815494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.815525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.815774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.815805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 
00:35:27.305 [2024-10-06 11:30:24.815999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.816009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.816139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.816149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.816389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.816421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.816665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.816696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.816931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.816962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.817320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.817354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.817638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.817670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.817955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.817988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.818317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.818349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.818667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.818698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 
00:35:27.305 [2024-10-06 11:30:24.819017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.819049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.819376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.819408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.819720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.819752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.819979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.820010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.820182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.820213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.820387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.820420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.820666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.820698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.820909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.820919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.821127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.821161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.821404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.821435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 
00:35:27.305 [2024-10-06 11:30:24.821737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.821768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.821963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.821972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.822210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.822222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.822512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.822544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.822882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.822914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.823217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.823249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.823548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.823580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.823817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.823849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.824104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.824136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.824436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.824468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 
00:35:27.305 [2024-10-06 11:30:24.824629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.824661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.824895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.824927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.825158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.825168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.825427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.825437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.825691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.825701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.825949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.825978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.826168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.826201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.826487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.826519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.826825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.826857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.827164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.827197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 
00:35:27.305 [2024-10-06 11:30:24.827497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.827529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.827833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.827864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.828124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.828135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.828321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.828331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.828521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.828552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.828800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.828831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.829052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.829073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.829338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.829369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.829658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.829690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 00:35:27.305 [2024-10-06 11:30:24.829928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.305 [2024-10-06 11:30:24.829938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.305 qpair failed and we were unable to recover it. 
00:35:27.305 [2024-10-06 11:30:24.830093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.830126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.830345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.830378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.830686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.830716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.831011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.831042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.831409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.831440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.831771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.831803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.832108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.832142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2272113 Killed "${NVMF_APP[@]}" "$@" 00:35:27.306 [2024-10-06 11:30:24.832389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.832423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.832696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.832728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 
00:35:27.306 [2024-10-06 11:30:24.832963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.832996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:35:27.306 [2024-10-06 11:30:24.833255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.833267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.833456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:27.306 [2024-10-06 11:30:24.833470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.833680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.833691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:27.306 [2024-10-06 11:30:24.833970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.834003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:27.306 [2024-10-06 11:30:24.834284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.834318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:27.306 [2024-10-06 11:30:24.834641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.834675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.834984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.835016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 
00:35:27.306 [2024-10-06 11:30:24.835386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.835420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.835705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.835737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.836045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.836056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.836324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.836335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.836483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.836515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.836752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.836784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.837031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.837075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.837404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.837437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.837603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.837634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.837861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.837872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 
00:35:27.306 [2024-10-06 11:30:24.838009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.838019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.838258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.838270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.838406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.838419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.838675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.838687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.838871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.838881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.839106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.839138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.839310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.839342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.839630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.839661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.839965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.839996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.840239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.840279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 
00:35:27.306 [2024-10-06 11:30:24.840567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.840599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.840927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.840958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.841267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.841300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2272804 00:35:27.306 [2024-10-06 11:30:24.841605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.841640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2272804 00:35:27.306 [2024-10-06 11:30:24.841905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.841937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 [2024-10-06 11:30:24.842175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.842188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 00:35:27.306 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2272804 ']' 00:35:27.306 [2024-10-06 11:30:24.842316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.306 [2024-10-06 11:30:24.842328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.306 qpair failed and we were unable to recover it. 
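Interleaved with the reconnect failures, the trace above relaunches the SPDK NVMe-oF target inside the cvl_0_0_ns_spdk network namespace and records its pid as nvmfpid; the refusals are expected until that new nvmf_tgt process is listening on 10.0.0.2:4420 again. A sketch of the launch step follows: the namespace, binary path and arguments are copied from the trace, while the flag comments and the surrounding shell are assumptions, not a quote of nvmf/common.sh.

# Relaunch step as traced above (sketch). Assumed flag meanings: -m is the
# reactor core mask, -i the shared-memory instance id, -e a tracepoint group mask.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!   # the harness records this pid (2272804 in this run) and waits on it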
00:35:27.306 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.585 [2024-10-06 11:30:24.842601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.842637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:27.585 [2024-10-06 11:30:24.842867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.842902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:27.585 [2024-10-06 11:30:24.843240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.843278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:27.585 [2024-10-06 11:30:24.843566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.843599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:27.585 [2024-10-06 11:30:24.843790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.843824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.844071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.844083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.844622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.844641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 
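After the launch, the harness sets rpc_addr=/var/tmp/spdk.sock and max_retries=100 and prints "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." before waitforlisten blocks until the new target exposes its RPC socket. The loop below is only a hedged approximation of such a wait, not the actual autotest_common.sh implementation; the socket path, the retry count and the pid handling are the parts taken from the trace.

waitforlisten_sketch() {
    # Approximation of a waitforlisten-style wait loop (illustrative only).
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # give up if the target already died
        [ -S "$rpc_addr" ] && return 0           # RPC UNIX domain socket has appeared
        sleep 0.1
    done
    return 1
}

A stricter check would also issue an RPC over the socket rather than only testing that the socket file exists, but that detail is omitted here.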
00:35:27.585 [2024-10-06 11:30:24.844902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.844913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.845159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.845173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.845322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.845335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.845514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.845525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.845807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.845819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.846065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.846077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.846226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.846237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.846523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.846538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.846751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.846761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.846985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.846997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 
00:35:27.585 [2024-10-06 11:30:24.847251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.847263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.847499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.847510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.847645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.847657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.847920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.847931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.848192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.848203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.848461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.848471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.848644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.848654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.848896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.848908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.849040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.849051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.849301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.849313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 
00:35:27.585 [2024-10-06 11:30:24.849501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.849511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.849808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.849819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.850007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.585 [2024-10-06 11:30:24.850018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.585 qpair failed and we were unable to recover it. 00:35:27.585 [2024-10-06 11:30:24.850234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.850245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.850380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.850391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.850630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.850642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.850901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.850911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.851177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.851189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.851445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.851455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.851646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.851656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 
00:35:27.586 [2024-10-06 11:30:24.851954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.851965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.852238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.852249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.852504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.852515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.852797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.852808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.852961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.852973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.853176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.853188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.853395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.853406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.853586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.853597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.853726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.853736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.853993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.854003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 
00:35:27.586 [2024-10-06 11:30:24.854178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.854189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.854309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.854320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.854513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.854524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.854698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.854710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.854889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.854901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.855143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.855155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.855360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.855371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.855553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.855567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.855840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.855852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.856071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.856082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 
00:35:27.586 [2024-10-06 11:30:24.856346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.856356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.856543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.856554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.856760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.856771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.856953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.856963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.857155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.857168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.857378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.857390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.857608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.857619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.857906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.857917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.858116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.858130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 00:35:27.586 [2024-10-06 11:30:24.858369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.586 [2024-10-06 11:30:24.858380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.586 qpair failed and we were unable to recover it. 
00:35:27.586 [2024-10-06 11:30:24.858573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.858584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.858827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.858838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.859027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.859038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.859243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.859254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.859402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.859414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.859604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.859615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.859871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.859881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.860075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.860086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.860273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.860285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.860407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.860418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 
00:35:27.587 [2024-10-06 11:30:24.860621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.860632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.860881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.860892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.861179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.861191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.861373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.861385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.861596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.861607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.861825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.861836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.862088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.862099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.862292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.862302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.862478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.862488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.862695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.862705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 
00:35:27.587 [2024-10-06 11:30:24.862883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.862893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.863153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.863164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.863282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.863293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.863529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.863540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.863652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.863662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.863940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.863951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.864142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.864153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.864358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.864374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.864636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.864646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.864777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.864787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 
00:35:27.587 [2024-10-06 11:30:24.865031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.865041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.865245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.865256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.865446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.865457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.865711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.865721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.865913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.865924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.866113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.866123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.587 qpair failed and we were unable to recover it. 00:35:27.587 [2024-10-06 11:30:24.866313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.587 [2024-10-06 11:30:24.866324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.866493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.866503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.866637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.866647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.866832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.866842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 
00:35:27.588 [2024-10-06 11:30:24.866957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.866968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.867055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.867074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.867190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.867201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.867309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.867319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.867517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.867528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.867649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.867659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.867836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.867847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.867965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.867975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.868143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.868154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.868335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.868345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 
00:35:27.588 [2024-10-06 11:30:24.868531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.868542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.868672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.868682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.868855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.868866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.868982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.868993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.869174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.869185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.869445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.869455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.869565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.869575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.869754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.869764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.870002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.870013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.870181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.870192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 
00:35:27.588 [2024-10-06 11:30:24.870301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.870311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.870547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.870558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.870632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.870641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.870876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.870886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.871003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.871014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.871220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.871231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.871427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.871438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.871621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.871634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.871808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.871819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.871940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.871950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 
00:35:27.588 [2024-10-06 11:30:24.872216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.872227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.872411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.588 [2024-10-06 11:30:24.872422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.588 qpair failed and we were unable to recover it. 00:35:27.588 [2024-10-06 11:30:24.872585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.872595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.872721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.872731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.872853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.872864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.873102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.873112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.873285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.873295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.873409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.873419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.873685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.873695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.873797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.873807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 
00:35:27.589 [2024-10-06 11:30:24.873911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.873920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.874113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.874123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.874380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.874391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.874576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.874587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.874823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.874833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.874952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.874962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.875166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.875177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.875347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.875357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.875651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.875662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.875795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.875806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 
00:35:27.589 [2024-10-06 11:30:24.875981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.875991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.876179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.876190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.876310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.876320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.876524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.876534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.876722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.876732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.876925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.876935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.877123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.877134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.877302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.877313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.877496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.877506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.877695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.877705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 
00:35:27.589 [2024-10-06 11:30:24.877928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.877939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.878056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.878071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.878273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.589 [2024-10-06 11:30:24.878284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.589 qpair failed and we were unable to recover it. 00:35:27.589 [2024-10-06 11:30:24.878473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.878483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.878664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.878675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.878877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.878888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.879056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.879076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.879186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.879199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.879311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.879320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.879497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.879508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 
00:35:27.590 [2024-10-06 11:30:24.879750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.879759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.879925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.879935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.880111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.880121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.880308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.880318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.880496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.880507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.880688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.880699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.880867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.880877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.880993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.881003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.881186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.881197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.881296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.881306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 
00:35:27.590 [2024-10-06 11:30:24.881538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.881549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.881687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.881698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.881806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.881816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.882079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.882090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.882279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.882290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.882472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.882482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.882659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.882669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.882811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.882821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.882993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.883003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.883184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.883195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 
00:35:27.590 [2024-10-06 11:30:24.883410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.883420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.883547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.883558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.883825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.883836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.884050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.884067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.884277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.884287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.884418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.884428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.884606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.884616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.884741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.884751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.884876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.884887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.590 qpair failed and we were unable to recover it. 00:35:27.590 [2024-10-06 11:30:24.885069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.590 [2024-10-06 11:30:24.885080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 
00:35:27.591 [2024-10-06 11:30:24.885255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.885265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.885509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.885520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.885692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.885702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.885785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.885795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.885974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.885984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.886074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.886085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.886252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.886262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.886426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.886439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.886610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.886620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.886737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.886747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 
00:35:27.591 [2024-10-06 11:30:24.886948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.886959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.887133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.887144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.887380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.887390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.887574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.887584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.887767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.887778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.887908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.887918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.888096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.888107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.888289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.888300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.888468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.888478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.888642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.888652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 
00:35:27.591 [2024-10-06 11:30:24.888920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.888930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.889040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.889051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.889292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.889303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.889541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.889551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.889675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.889685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.889856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.889876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.890110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.890121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.890386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.890396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.890509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.890520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.890800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.890811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 
00:35:27.591 [2024-10-06 11:30:24.890988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.890999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.891099] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:35:27.591 [2024-10-06 11:30:24.891150] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:27.591 [2024-10-06 11:30:24.891168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.891181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.891297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.891306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.891512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.891523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.891641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.591 [2024-10-06 11:30:24.891652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.591 qpair failed and we were unable to recover it. 00:35:27.591 [2024-10-06 11:30:24.891775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.891786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.891992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.892003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.892131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.892142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.892325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.892337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 
00:35:27.592 [2024-10-06 11:30:24.892575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.892586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.892721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.892732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.892915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.892926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.893165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.893177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.893280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.893291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.893475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.893487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.893653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.893665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.893798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.893812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.894051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.894068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.894249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.894260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 
00:35:27.592 [2024-10-06 11:30:24.894529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.894541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.894717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.894728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.894911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.894922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.895157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.895169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.895425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.895437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.895567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.895578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.895648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.895658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.895829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.895839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.896025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.896035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.896213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.896224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 
00:35:27.592 [2024-10-06 11:30:24.896348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.896358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.896449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.896459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.896593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.896603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.896781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.896791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.896972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.896982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.897241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.897252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.897430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.897441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.897627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.897638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.897746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.897756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.897826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.897836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 
00:35:27.592 [2024-10-06 11:30:24.898030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.898040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.898134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.592 [2024-10-06 11:30:24.898145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.592 qpair failed and we were unable to recover it. 00:35:27.592 [2024-10-06 11:30:24.898249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.898260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.898382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.898393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.898583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.898609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.898729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.898739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.898987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.898998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.899108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.899119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.899303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.899314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.899482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.899493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 
00:35:27.593 [2024-10-06 11:30:24.899678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.899689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.899866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.899877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.900083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.900094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.900264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.900275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.900442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.900452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.900623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.900634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.900892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.900901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.901083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.901098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.901298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.901309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.901503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.901514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 
00:35:27.593 [2024-10-06 11:30:24.901700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.901710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.901836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.901846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.902107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.902118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.902215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.902225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.902412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.902423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.902504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.902520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.902704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.902716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.902888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.902906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.903096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.903107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.903228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.903239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 
00:35:27.593 [2024-10-06 11:30:24.903360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.903370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.903470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.903480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.903644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.903654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.903890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.903901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.904135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.904146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.904347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.904358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.904536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.904548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.904710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.904721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.593 [2024-10-06 11:30:24.904889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.593 [2024-10-06 11:30:24.904899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.593 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.905065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.905076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 
00:35:27.594 [2024-10-06 11:30:24.905334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.905345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.905454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.905465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.905682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.905693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.905875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.905885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.905955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.905966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.906166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.906177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.906356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.906367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.906496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.906506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.906636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.906647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.906902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.906912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 
00:35:27.594 [2024-10-06 11:30:24.907022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.907032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.907222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.907233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.907398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.907409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.907663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.907673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.907873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.907884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.908049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.908064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.908244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.908255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.908438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.908451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.908631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.908642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.908906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.908916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 
00:35:27.594 [2024-10-06 11:30:24.909226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.909237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.909486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.909497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.909610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.909621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.909816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.909827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.909991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.910001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.910194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.910205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.910348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.910358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.910543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.910553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.910671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.910681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.910850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.910860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 
00:35:27.594 [2024-10-06 11:30:24.910995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.594 [2024-10-06 11:30:24.911005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.594 qpair failed and we were unable to recover it. 00:35:27.594 [2024-10-06 11:30:24.911116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.911128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.911228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.911238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.911432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.911443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.911619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.911630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.911765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.911775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.911877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.911888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.911999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.912010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.912275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.912286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.912400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.912411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 
00:35:27.595 [2024-10-06 11:30:24.912649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.912659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.912841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.912852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.913033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.913045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.913233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.913244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.913344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.913355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.913540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.913550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.913805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.913815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.914078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.914088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.914172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.914182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.914370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.914380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 
00:35:27.595 [2024-10-06 11:30:24.914551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.914561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.914815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.914826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.914990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.915000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.915176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.915188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.915321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.915332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.915521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.915531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.915669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.915680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.915864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.915877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.915995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.916005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.916190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.916201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 
00:35:27.595 [2024-10-06 11:30:24.916374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.916385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.916482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.916492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.916745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.916755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.916837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.916847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.917013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.917024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.917189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.917200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.917379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.917390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.595 [2024-10-06 11:30:24.917484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.595 [2024-10-06 11:30:24.917494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.595 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.917726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.917736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.917911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.917921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 
00:35:27.596 [2024-10-06 11:30:24.918102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.918113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.918350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.918361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.918529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.918540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.918643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.918654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.918822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.918833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.919016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.919026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.919200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.919213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.919411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.919422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.919622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.919631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.919804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.919816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 
00:35:27.596 [2024-10-06 11:30:24.920070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.920080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.920280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.920290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.920418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.920428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.920547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.920557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.920674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.920684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.920867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.920877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.921028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.921039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.921278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.921289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.921456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.921466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.921645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.921656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 
00:35:27.596 [2024-10-06 11:30:24.921761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.921771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.922026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.922036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.922144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.922155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.922322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.922332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.922460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.922470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.922663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.922674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.922910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.922921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.923035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.923047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.923199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.923234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81c4000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.923388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.923427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 
00:35:27.596 [2024-10-06 11:30:24.923669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.923705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.923899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.923912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.924169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.924180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.924353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.924363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.596 qpair failed and we were unable to recover it. 00:35:27.596 [2024-10-06 11:30:24.924484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.596 [2024-10-06 11:30:24.924495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.924668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.924679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.924885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.924895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.925067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.925077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.925258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.925269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.925468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.925478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 
00:35:27.597 [2024-10-06 11:30:24.925600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.925610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.925788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.925799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.926061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.926072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.926179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.926191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.926371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.926381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.926620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.926630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.926837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.926847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.926947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.926957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.927119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.927130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.927240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.927250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 
00:35:27.597 [2024-10-06 11:30:24.927414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.927424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.927551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.927561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.927758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.927768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.927870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.927880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.928154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.928164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.928327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.928337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.928463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.928473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.928595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.928604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.928729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.928739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.928862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.928872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 
00:35:27.597 [2024-10-06 11:30:24.929038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.929048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.929230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.929241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.929497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.929508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.929767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.929777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.929937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.929948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.930198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.930209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.930342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.930352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.930534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.930546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.930674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.930684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.930857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.930867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 
00:35:27.597 [2024-10-06 11:30:24.931044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.931055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.597 qpair failed and we were unable to recover it. 00:35:27.597 [2024-10-06 11:30:24.931189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.597 [2024-10-06 11:30:24.931199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.931386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.931397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.931583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.931593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.931756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.931766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.931847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.931857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.931971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.931981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.932144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.932155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.932274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.932284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.932535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.932545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 
00:35:27.598 [2024-10-06 11:30:24.932787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.932797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.932933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.932944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.933075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.933085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.933262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.933273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.933478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.933488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.933613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.933623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.933856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.933867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.934065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.934076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.934195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.934205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.934309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.934320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 
00:35:27.598 [2024-10-06 11:30:24.934496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.934506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.934739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.934749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.934930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.934940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.935078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.935089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.935269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.935279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.935394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.935404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.935599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.935609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.935730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.935739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.935907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.935917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.936046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.936056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 
00:35:27.598 [2024-10-06 11:30:24.936246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.936257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.936366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.936376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.936539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.936550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.936713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.936724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.936832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.936843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.937013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.937023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.937199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.937210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.937339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.598 [2024-10-06 11:30:24.937354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.598 qpair failed and we were unable to recover it. 00:35:27.598 [2024-10-06 11:30:24.937589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.937599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.937859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.937869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 
00:35:27.599 [2024-10-06 11:30:24.938126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.938137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.938363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.938373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.938624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.938634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.938833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.938843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.939028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.939038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.939323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.939334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.939513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.939523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.939768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.939779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.939955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.939965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.940151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.940162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 
00:35:27.599 [2024-10-06 11:30:24.940352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.940362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.940613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.940623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.940816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.940826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.941003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.941014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.941273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.941284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.941516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.941527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.941808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.941818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.941946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.941956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.942222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.942232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.942477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.942487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 
00:35:27.599 [2024-10-06 11:30:24.942653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.942663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.942894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.942904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.943083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.943094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.943346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.943356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.943590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.943600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.943768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.943778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.943974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.943984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.944239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.944250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.944506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.944516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.599 [2024-10-06 11:30:24.944649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.944659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 
00:35:27.599 [2024-10-06 11:30:24.944838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.599 [2024-10-06 11:30:24.944848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.599 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.945082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.945093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.945256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.945266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.945467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.945477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.945734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.945745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.945979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.945989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.946156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.946166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.946349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.946361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.946631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.946641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.946764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.946774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 
00:35:27.600 [2024-10-06 11:30:24.946908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.946919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.947096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.947106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.947356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.947367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.947560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.947570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.947768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.947779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.948028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.948038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.948219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.948230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.948434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.948444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.948577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.948587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.948760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.948770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 
00:35:27.600 [2024-10-06 11:30:24.948942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.948953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.949234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.949245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.949380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.949391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.949641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.949650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.949832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.949842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.950026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.950036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.950351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.950362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.950621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.950631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.950915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.950925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.951158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.951168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 
00:35:27.600 [2024-10-06 11:30:24.951348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.951358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.951614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.951624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.951690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:27.600 [2024-10-06 11:30:24.951853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.951864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.952098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.952109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.952314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.952324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.952576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.952586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.952750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.600 [2024-10-06 11:30:24.952760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.600 qpair failed and we were unable to recover it. 00:35:27.600 [2024-10-06 11:30:24.953013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.953023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.953283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.953294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 
00:35:27.601 [2024-10-06 11:30:24.953533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.953544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.953725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.953737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.953849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.953859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.954093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.954104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.954295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.954305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.954468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.954479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.954663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.954673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.954858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.954868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.955036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.955046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.955258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.955269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 
00:35:27.601 [2024-10-06 11:30:24.955454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.955464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.955644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.955654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.955823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.955834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.955967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.955978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.956239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.956250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.956438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.956449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.956585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.956596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.956704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.956714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.956971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.956982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.957149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.957161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 
00:35:27.601 [2024-10-06 11:30:24.957396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.957407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.957528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.957541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.957741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.957752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.958011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.958022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.958318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.958331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.958524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.958535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.958720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.958730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.958859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.958869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.959042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.959054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.959246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.959258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 
00:35:27.601 [2024-10-06 11:30:24.959448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.959459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.959714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.959725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.959925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.959935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.960145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.960157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.601 [2024-10-06 11:30:24.960398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.601 [2024-10-06 11:30:24.960410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.601 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.960659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.960671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.960854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.960865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.961062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.961074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.961333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.961344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.961510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.961520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 
00:35:27.602 [2024-10-06 11:30:24.961697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.961708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.961903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.961913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.962147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.962158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.962416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.962426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.962613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.962623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.962812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.962822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.963024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.963034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.963197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.963208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.963465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.963476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.963732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.963742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 
00:35:27.602 [2024-10-06 11:30:24.963952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.963962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.964246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.964256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.964517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.964528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.964723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.964733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.964917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.964927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.965130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.965141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.965323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.965333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.965590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.965600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.965860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.965870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.966126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.966137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 
00:35:27.602 [2024-10-06 11:30:24.966373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.966383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.966569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.966581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.966811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.966821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.967080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.967091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.967327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.967337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.967594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.967604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.967860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.967870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.968111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.968122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.968315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.968325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.968582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.968591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 
00:35:27.602 [2024-10-06 11:30:24.968774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.968784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.602 qpair failed and we were unable to recover it. 00:35:27.602 [2024-10-06 11:30:24.968993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.602 [2024-10-06 11:30:24.969003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.969180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.969191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.969450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.969461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.969691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.969702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.969879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.969890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.970077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.970089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.970348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.970359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.970540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.970553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.970812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.970824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 
00:35:27.603 [2024-10-06 11:30:24.971062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.971075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.971341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.971354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.971558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.971570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.971803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.971815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.972071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.972084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.972294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.972307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.972573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.972585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.972826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.972837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.973085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.973098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.973275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.973286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 
00:35:27.603 [2024-10-06 11:30:24.973470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.973481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.973659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.973670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.973898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.973909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.974150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.974161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.974354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.974365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.974626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.974638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.974901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.974913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.975090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.975101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.975290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.975301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.975505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.975516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 
00:35:27.603 [2024-10-06 11:30:24.975797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.975809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.975958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.975969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.976091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.976102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.976289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.976300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.976565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.976576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.976693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.976704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.603 qpair failed and we were unable to recover it. 00:35:27.603 [2024-10-06 11:30:24.976951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.603 [2024-10-06 11:30:24.976963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.977237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.977249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.977539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.977551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.977682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.977693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 
00:35:27.604 [2024-10-06 11:30:24.977947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.977958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.978160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.978171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.978458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.978469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.978646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.978657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.978925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.978936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.979222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.979234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.979488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.979499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.979712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.979724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.979991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.980002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.980135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.980146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 
00:35:27.604 [2024-10-06 11:30:24.980404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.980415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.980593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.980603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.980802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.980813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.980944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.980954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.981209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.981219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.981403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.981413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.981595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.981607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.981863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.981874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.982068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.982084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.982350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.982360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 
00:35:27.604 [2024-10-06 11:30:24.982599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.982609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.982806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.982816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.983075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.983086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.983288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.983298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.983481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.983492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.983604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.983613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.983845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.983856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.984086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.984096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.984228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.984238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 00:35:27.604 [2024-10-06 11:30:24.984425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.984435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.604 qpair failed and we were unable to recover it. 
00:35:27.604 [2024-10-06 11:30:24.984737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.604 [2024-10-06 11:30:24.984748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.984940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.984950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.985209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.985221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.985427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.985437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.985702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.985714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.985872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.985882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.986151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.986162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.986448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.986458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.986667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.986679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.986938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.986949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 
00:35:27.605 [2024-10-06 11:30:24.987141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.987152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.987286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.987297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.987457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.987467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.987661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.987673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.987908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.987919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.988209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.988220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.988422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.988433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.988682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.988692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.988941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.988951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.989187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.989198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 
00:35:27.605 [2024-10-06 11:30:24.989455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.989468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.989727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.989739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.989977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.989989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.990126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.990138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.990336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.990348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.990546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.990557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.990782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.990792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.990998] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:27.605 [2024-10-06 11:30:24.991033] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:27.605 [2024-10-06 11:30:24.991040] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:27.605 [2024-10-06 11:30:24.991050] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:27.605 [2024-10-06 11:30:24.991049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.991057] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:27.605 [2024-10-06 11:30:24.991066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 
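The app_setup_trace notices above are the log's own how-to for capturing this run's tracepoints. A minimal sketch of the two options they describe, assuming the shared-memory trace file /dev/shm/nvmf_trace.0 named in the notice still exists on the build host (the copy destination below is illustrative, not taken from the log):

  # Option 1: snapshot the running nvmf app's events at runtime (command quoted from the notice)
  spdk_trace -s nvmf -i 0
  # Option 2: keep the trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0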
00:35:27.605 [2024-10-06 11:30:24.991317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.991329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.991515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.991528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.991711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.991722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.991913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.991923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.992100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.992111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.992237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.992247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.992375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.605 [2024-10-06 11:30:24.992385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.605 qpair failed and we were unable to recover it. 00:35:27.605 [2024-10-06 11:30:24.992620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.992630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.992591] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:35:27.606 [2024-10-06 11:30:24.992699] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:35:27.606 [2024-10-06 11:30:24.992808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.992804] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:35:27.606 [2024-10-06 11:30:24.992823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 
00:35:27.606 [2024-10-06 11:30:24.992804] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:35:27.606 [2024-10-06 11:30:24.993082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.993093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.993195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.993207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.993401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.993412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.993546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.993557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.993741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.993752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.993946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.993956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.994051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.994066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.994250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.994261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.994433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.994443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 
00:35:27.606 [2024-10-06 11:30:24.994722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.994733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.994942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.994952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.995185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.995196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.995428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.995441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.995732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.995743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.995971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.995984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.996219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.996231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.996477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.996487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.996742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.996752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.996986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.996997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 
00:35:27.606 [2024-10-06 11:30:24.997281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.997292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.997475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.997486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.997766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.997776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.998014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.998024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.998212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.998222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.998360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.998371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.998540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.998550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.998805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.998816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.998983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.998994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.999274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.999286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 
00:35:27.606 [2024-10-06 11:30:24.999540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.999551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:24.999820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:24.999831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:25.000020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:25.000030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:25.000257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:25.000268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.606 [2024-10-06 11:30:25.000436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.606 [2024-10-06 11:30:25.000447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.606 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.000634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.000645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.000872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.000883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.001078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.001090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.001276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.001286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.001534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.001544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 
00:35:27.607 [2024-10-06 11:30:25.001809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.001819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.002004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.002015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.002198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.002212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.002400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.002411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.002665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.002676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.002931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.002941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.003206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.003217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.003399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.003410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.003592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.003602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.003862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.003872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 
00:35:27.607 [2024-10-06 11:30:25.004062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.004074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.004292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.004302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.004564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.004575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.004761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.004771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.005027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.005039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.005351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.005362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.005543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.005554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.005669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.005679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.005926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.005937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.006221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.006233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 
00:35:27.607 [2024-10-06 11:30:25.006344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.006356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.006529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.006540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.006823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.006835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.007002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.007013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.007191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.007203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.007388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.007398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.007578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.007588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.007712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.007723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.007958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.007970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.008183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.008219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 
00:35:27.607 [2024-10-06 11:30:25.008536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.008566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.607 qpair failed and we were unable to recover it. 00:35:27.607 [2024-10-06 11:30:25.008891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.607 [2024-10-06 11:30:25.008909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.009185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.009204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.009463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.009480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.009755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.009773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.010017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.010033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.010319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.010337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.010540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.010557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.010689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.010704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.010975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.010992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 
00:35:27.608 [2024-10-06 11:30:25.011208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.011226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.011492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.011509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.011698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.011717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.011959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.011975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.012252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.012270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.012469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.012485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.012734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.012751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.012950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.012965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.013153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.013168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.013435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.013452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 
00:35:27.608 [2024-10-06 11:30:25.013641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.013656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.013953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.013970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.014218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.014236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.014507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.014526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.014788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.014805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.015081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.015099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.015348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.015365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.015542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.015558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.015847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.015864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.016140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.016157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 
00:35:27.608 [2024-10-06 11:30:25.016415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.016433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.016626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.016642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.016863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.016878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.017089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.608 [2024-10-06 11:30:25.017106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.608 qpair failed and we were unable to recover it. 00:35:27.608 [2024-10-06 11:30:25.017372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.017390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.017597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.017612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.017800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.017816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.018017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.018033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.018232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.018249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.018547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.018568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 
00:35:27.609 [2024-10-06 11:30:25.018836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.018853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.019072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.019088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.019335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.019351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.019495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.019511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.019649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.019665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.019855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.019870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.020153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.020170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.020280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.020296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.020442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.020458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.020573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.020590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 
00:35:27.609 [2024-10-06 11:30:25.020777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.020793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.021064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.021081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.021306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.021322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.021572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.021588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.021832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.021849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.022116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.022133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.022326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.022342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.022634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.022651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.022866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.022882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.023017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.023033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 
00:35:27.609 [2024-10-06 11:30:25.023302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.023318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.023514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.023530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.023721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.023737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.024008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.024024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.024300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.024317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.024579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.024595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.024863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.024883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.025029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.025044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.025323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.025339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.025620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.025636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 
00:35:27.609 [2024-10-06 11:30:25.025897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.025913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.609 qpair failed and we were unable to recover it. 00:35:27.609 [2024-10-06 11:30:25.026202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.609 [2024-10-06 11:30:25.026219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.026493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.026510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.026804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.026820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.027080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.027097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.027373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.027389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.027642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.027659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.027848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.027864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.028158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.028176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.028448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.028465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 
00:35:27.610 [2024-10-06 11:30:25.028665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.028681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.028952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.028970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.029169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.029186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.029383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.029399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.029661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.029677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.029886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.029901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.030168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.030185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.030394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.030409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.030615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.030631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.030898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.030913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 
00:35:27.610 [2024-10-06 11:30:25.031202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.031230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.031543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.031554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.031737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.031748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.032006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.032020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.032204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.032215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.032419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.032429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.032543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.032553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.032738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.032749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.032869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.032879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.033131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.033142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 
00:35:27.610 [2024-10-06 11:30:25.033421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.033431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.033707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.033717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.033973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.033983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.034258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.034270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.034450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.034460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.034739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.034749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.035029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.035039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.035291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.035302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.610 [2024-10-06 11:30:25.035558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.610 [2024-10-06 11:30:25.035568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.610 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.035826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.035836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 
00:35:27.611 [2024-10-06 11:30:25.035946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.035956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.036186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.036197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.036374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.036384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.036560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.036570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.036834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.036845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.037110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.037121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.037398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.037408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.037583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.037593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.037780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.037791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.038051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.038065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 
00:35:27.611 [2024-10-06 11:30:25.038331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.038351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.038562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.038579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.038873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.038890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.039152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.039170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.039423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.039439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.039691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.039707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.039830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.039846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.040047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.040068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.040259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.040275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.040479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.040495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 
00:35:27.611 [2024-10-06 11:30:25.040748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.040765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.041008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.041024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.041215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.041231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.041438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.041454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.041636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.041652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.041883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.041900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.042195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.042211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.042483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.042499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.042771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.042788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.042991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.043008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 
00:35:27.611 [2024-10-06 11:30:25.043260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.043277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.043468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.043484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.043680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.043696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.043960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.043976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.044167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.044184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.044323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.611 [2024-10-06 11:30:25.044339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.611 qpair failed and we were unable to recover it. 00:35:27.611 [2024-10-06 11:30:25.044515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.044531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.044784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.044805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.045090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.045107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.045302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.045320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 
00:35:27.612 [2024-10-06 11:30:25.045564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.045581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.045759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.045775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.046055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.046077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.046345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.046362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.046558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.046575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.046753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.046769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.046962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.046978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.047224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.047242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.047454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.047470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.047764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.047781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 
00:35:27.612 [2024-10-06 11:30:25.048029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.048046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.048186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.048202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.048469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.048485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.048756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.048774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.048967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.048983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.049261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.049278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.049528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.049543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.049786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.049801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.049975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.049990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.050257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.050273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 
00:35:27.612 [2024-10-06 11:30:25.050511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.050526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.050772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.050788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.051033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.051049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.051307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.051324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.051573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.051588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.051827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.051843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.052031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.052046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.052265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.052281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.052409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.052424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.052681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.052697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 
00:35:27.612 [2024-10-06 11:30:25.052915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.052930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.053063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.053079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.053341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.053357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.612 qpair failed and we were unable to recover it. 00:35:27.612 [2024-10-06 11:30:25.053637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.612 [2024-10-06 11:30:25.053653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.053922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.053937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.054201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.054218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.054343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.054358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.054626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.054641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.054897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.054919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.055190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.055207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 
00:35:27.613 [2024-10-06 11:30:25.055401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.055416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.055636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.055651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.055863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.055879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.056139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.056155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.056401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.056416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.056629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.056645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.056773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.056789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.057057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.057077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.057271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.057287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.057564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.057579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 
00:35:27.613 [2024-10-06 11:30:25.057852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.057867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.058135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.058151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.058360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.058376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.058572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.058587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.058834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.058849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.059116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.059132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.059422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.059437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.059680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.059696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.059904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.059920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.060215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.060232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 
00:35:27.613 [2024-10-06 11:30:25.060479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.060495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.060765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.060780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.061011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.061026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.061273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.061289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.061469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.061485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.613 [2024-10-06 11:30:25.061676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.613 [2024-10-06 11:30:25.061695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.613 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.061978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.061993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.062192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.062208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.062401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.062417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.062607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.062622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 
00:35:27.614 [2024-10-06 11:30:25.062760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.062775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.062949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.062965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.063111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.063127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.063371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.063387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.063681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.063697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.063808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.063823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.064094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.064110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.064298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.064313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.064518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.064534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.064675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.064691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 
00:35:27.614 [2024-10-06 11:30:25.064863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.064879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.065124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.065140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.065407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.065422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.065622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.065637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.065810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.065825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.066071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.066087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.066305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.066320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.066566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.066581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.066845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.066861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.067093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.067109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 
00:35:27.614 [2024-10-06 11:30:25.067233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.067250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.067509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.067526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.067818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.067836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.068108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.068126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.068267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.068284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.068462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.068477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.068666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.068682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.068811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.068826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.069021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.069036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.069150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.069166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 
00:35:27.614 [2024-10-06 11:30:25.069363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.069379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.069671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.069686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.069926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.614 [2024-10-06 11:30:25.069942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.614 qpair failed and we were unable to recover it. 00:35:27.614 [2024-10-06 11:30:25.070219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.070235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.070491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.070507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.070642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.070657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.070935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.070954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.071226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.071243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.071507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.071524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.071661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.071676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 
00:35:27.615 [2024-10-06 11:30:25.071866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.071881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.072151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.072167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.072346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.072362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.072560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.072585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.072821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.072832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.073087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.073097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.073209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.073218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.073398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.073408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.073575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.073585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.073868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.073879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 
00:35:27.615 [2024-10-06 11:30:25.074092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.074103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.074226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.074237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.074424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.074434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.074721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.074730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.074914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.074924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.075191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.075202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.075414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.075424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.075695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.075705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.075815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.075825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.076079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.076090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 
00:35:27.615 [2024-10-06 11:30:25.076291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.076302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.076532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.076542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.076798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.076808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.077055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.077068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.077265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.077275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.077528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.077539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.077797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.077807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.077921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.077931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.078115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.078126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 00:35:27.615 [2024-10-06 11:30:25.078383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.615 [2024-10-06 11:30:25.078393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.615 qpair failed and we were unable to recover it. 
00:35:27.615 [2024-10-06 11:30:25.078577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.078587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.078817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.078827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.079109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.079119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.079306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.079316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.079442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.079452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.079641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.079652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.079882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.079894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.080145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.080156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.080322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.080333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.080585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.080596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 
00:35:27.616 [2024-10-06 11:30:25.080851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.080861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.081121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.081132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.081370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.081380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.081547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.081557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.081808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.081819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.082077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.082087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.082327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.082337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.082591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.082602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.082809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.082819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.083004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.083015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 
00:35:27.616 [2024-10-06 11:30:25.083254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.083266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.083374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.083384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.083563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.083573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.083823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.083834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.084013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.084024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.084255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.084266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.084444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.084455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.084688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.084699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.084955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.084966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.085221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.085232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 
00:35:27.616 [2024-10-06 11:30:25.085409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.085419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.085598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.085609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.085777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.085787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.085964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.085974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.086203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.086214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.086486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.086496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.616 [2024-10-06 11:30:25.086754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.616 [2024-10-06 11:30:25.086764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.616 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.086996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.087007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.087188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.087199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.087370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.087381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 
00:35:27.617 [2024-10-06 11:30:25.087516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.087526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.087731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.087742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.087903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.087913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.088169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.088180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.088359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.088369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.088549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.088559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.088816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.088828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.088995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.089005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.089258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.089269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.089451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.089461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 
00:35:27.617 [2024-10-06 11:30:25.089720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.089731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.089903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.089913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.090115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.090125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.090383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.090394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.090515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.090525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.090690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.090700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.090864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.090874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.091041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.091051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.091239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.091249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.091484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.091495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 
00:35:27.617 [2024-10-06 11:30:25.091694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.091704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.091879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.091889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.092076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.092087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.092196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.092206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.092458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.092468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.092719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.092729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.092980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.092990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.093246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.093257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.093519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.093530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.093766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.093776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 
00:35:27.617 [2024-10-06 11:30:25.093962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.093972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.094225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.094236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.094419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.094429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.617 [2024-10-06 11:30:25.094615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.617 [2024-10-06 11:30:25.094625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.617 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.094885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.094895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.095088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.095098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.095345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.095356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.095613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.095624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.095801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.095812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.095989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.096000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 
00:35:27.618 [2024-10-06 11:30:25.096254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.096265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.096481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.096492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.096688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.096698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.096941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.096952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.097203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.097215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.097384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.097394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.097569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.097583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.097855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.097865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.098073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.098083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.098252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.098262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 
00:35:27.618 [2024-10-06 11:30:25.098430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.098441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.098656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.098666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.098776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.098786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.099065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.099076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.099332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.099343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.099581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.099593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:27.618 [2024-10-06 11:30:25.099877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.099889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:35:27.618 [2024-10-06 11:30:25.100073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.100085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.100283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.100297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 
00:35:27.618 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:27.618 [2024-10-06 11:30:25.100486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.100498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:27.618 [2024-10-06 11:30:25.100673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.100685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:27.618 [2024-10-06 11:30:25.100937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.100951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.101168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.101179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.101446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.618 [2024-10-06 11:30:25.101456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.618 qpair failed and we were unable to recover it. 00:35:27.618 [2024-10-06 11:30:25.101665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.101676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.101958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.101970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.102234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.102245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.102449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.102459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 
00:35:27.619 [2024-10-06 11:30:25.102746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.102757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.102991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.103001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.103175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.103185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.103432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.103454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.103673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.103703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.103883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.103896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.104079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.104091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.104375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.104388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.104655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.104667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.104912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.104924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 
00:35:27.619 [2024-10-06 11:30:25.105039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.105050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.105315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.105327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.105453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.105464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.105714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.105725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.105925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.105936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.106143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.106155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.106324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.106339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.106516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.106528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.106803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.106814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.107013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.107024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 
00:35:27.619 [2024-10-06 11:30:25.107235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.107250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.107515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.107526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.107642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.107652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.107855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.107866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.108073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.108084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.108218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.108230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.108458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.108470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.108597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.108609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.108889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.108900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.109156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.109168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 
00:35:27.619 [2024-10-06 11:30:25.109361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.109372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.109580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.109591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.109861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.619 [2024-10-06 11:30:25.109873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.619 qpair failed and we were unable to recover it. 00:35:27.619 [2024-10-06 11:30:25.109975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.109986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.110217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.110229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.110417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.110443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.110641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.110656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.110938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.110954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.111240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.111257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.111453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.111469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 
00:35:27.620 [2024-10-06 11:30:25.111717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.111733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.111928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.111943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.112259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.112274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81d0000b90 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.112550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.112569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.112854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.112871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.113069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.113085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.113279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.113295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.113424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.113439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.113730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.113746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.114065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.114082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 
00:35:27.620 [2024-10-06 11:30:25.114241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.114259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.114506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.114522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.114810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.114826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.115106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.115122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.115274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.115289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.115549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.115564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.115778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.115794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.115991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.116006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.116282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.116299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.116490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.116506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 
00:35:27.620 [2024-10-06 11:30:25.116708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.116724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.116994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.117010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.117209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.117225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.117423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.117440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.117614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.117629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.117935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.117951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.118168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.118185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.118382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.118397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.118577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.118593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 00:35:27.620 [2024-10-06 11:30:25.118780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.620 [2024-10-06 11:30:25.118798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.620 qpair failed and we were unable to recover it. 
00:35:27.620 [2024-10-06 11:30:25.119040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.119122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.119313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.119330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.119542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.119560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.119866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.119883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.120155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.120172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.120364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.120379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.120523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.120538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.120788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.120803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.121008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.121024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.121300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.121317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 
00:35:27.621 [2024-10-06 11:30:25.121564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.121579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.121875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.121890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.122126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.122143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.122435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.122451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.122641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.122657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.122972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.122987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.123229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.123245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.123437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.123453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.123734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.123749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.123952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.123968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 
00:35:27.621 [2024-10-06 11:30:25.124191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.124207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.124339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.124354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.124602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.124619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x686750 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.124837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.124850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.125009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.125020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.125160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.125171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.125360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.125371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.125491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.125504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.125622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.125633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.125895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.125906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 
00:35:27.621 [2024-10-06 11:30:25.126091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.126102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.126219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.126230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.126396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.126407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.126587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.126597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.126761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.126771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.126932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.126942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.127054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.127069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.621 qpair failed and we were unable to recover it. 00:35:27.621 [2024-10-06 11:30:25.127306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.621 [2024-10-06 11:30:25.127317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.127485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.127495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.127615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.127625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 
00:35:27.622 [2024-10-06 11:30:25.127815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.127826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.128000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.128010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.128292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.128303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.128410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.128419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.128543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.128554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.128808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.128818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.128986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.128996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.129110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.129121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.129291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.129301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.129511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.129520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 
00:35:27.622 [2024-10-06 11:30:25.129728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.129738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.129913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.129923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.130193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.130203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.130317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.130327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.130441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.130451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.130552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.130563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.130736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.130746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.131007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.131017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.131236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.131247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.131416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.131428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 
00:35:27.622 [2024-10-06 11:30:25.131549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.131559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.131812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.131824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.132054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.132068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.132308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.132318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.132587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.132599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.132798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.132809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.132987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.132997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.133181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.133194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.133331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.133340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 00:35:27.622 [2024-10-06 11:30:25.133487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.622 [2024-10-06 11:30:25.133497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.622 qpair failed and we were unable to recover it. 
00:35:27.623 [2024-10-06 11:30:25.133776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.133787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.134043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.134053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.134221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.134232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.134362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.134371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.134537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.134547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.134749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.134759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.134875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.134885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.135153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.135164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.135396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.135407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.135641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.135651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 
00:35:27.623 [2024-10-06 11:30:25.135975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.135985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.136169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.136181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:27.623 [2024-10-06 11:30:25.136346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.136358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.136472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.136482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:27.623 [2024-10-06 11:30:25.136806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.136820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.623 [2024-10-06 11:30:25.137072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.137085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.137251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.137262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:27.623 [2024-10-06 11:30:25.137382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.137393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 
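The script trace interleaved above shows target_disconnect.sh installing its cleanup trap (dump the app's shared-memory trace, then nvmftestfini, on SIGINT/SIGTERM/EXIT) and then issuing rpc_cmd bdev_malloc_create 64 512 -b Malloc0, i.e. creating a 64 MB RAM-backed bdev with a 512-byte block size, named Malloc0, to serve as the test's backing device. A rough sketch of the equivalent direct invocation, assuming rpc_cmd is the autotest wrapper around scripts/rpc.py and the target listens on the default RPC socket path:
  # create the backing bdev by hand; /var/tmp/spdk.sock is assumed here as the default RPC socket
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0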
00:35:27.623 [2024-10-06 11:30:25.137508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.137518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.137720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.137730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.137937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.137948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.138128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.138139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.138405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.138417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.138592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.138602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.138788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.138797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.139070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.139081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.623 [2024-10-06 11:30:25.139297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.623 [2024-10-06 11:30:25.139308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.623 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.139565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.139576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 
00:35:27.889 [2024-10-06 11:30:25.139785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.139795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.139985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.139996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.140179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.140190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.140373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.140383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.140516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.140527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.140639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.140649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.140762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.140773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.140981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.140991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.141163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.141174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.141283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.141293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 
00:35:27.889 [2024-10-06 11:30:25.141494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.141504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.141611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.141621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.141812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.141822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.141955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.141965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.142207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.142218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.142349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.142359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.142545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.142555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.142855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.142865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.143031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.143042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.143340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.143351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 
00:35:27.889 [2024-10-06 11:30:25.143555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.143566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.143839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.143849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.144087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.144098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.144352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.144363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.144497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.144506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.144685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.144695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.144927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.144938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.145205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.889 [2024-10-06 11:30:25.145215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.889 qpair failed and we were unable to recover it. 00:35:27.889 [2024-10-06 11:30:25.145327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.145338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.145600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.145611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 
00:35:27.890 [2024-10-06 11:30:25.145739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.145749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.146010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.146020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.146268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.146280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.146523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.146534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.146721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.146733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.146990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.147001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.147260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.147271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.147502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.147513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.147694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.147705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.147963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.147974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 
00:35:27.890 [2024-10-06 11:30:25.148190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.148201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.148412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.148422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.148542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.148553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.148673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.148684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.148931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.148943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.149219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.149230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.149436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.149446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.149570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.149580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.149773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.149785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.149959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.149970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 
00:35:27.890 [2024-10-06 11:30:25.150232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.150244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.150474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.150485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.150755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.150766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.150958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.150969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.151224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.151236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.151356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.151367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.151612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.151623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.151800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.151811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.151927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.151937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.152220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.152232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 
00:35:27.890 [2024-10-06 11:30:25.152352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.152362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.152553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.152564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.152734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.152745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.152978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.152991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.890 [2024-10-06 11:30:25.153183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.890 [2024-10-06 11:30:25.153194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.890 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.153365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.153377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.153475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.153485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.153682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.153694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.153971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.153983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.154269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.154282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 
00:35:27.891 [2024-10-06 11:30:25.154538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.154549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.154680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.154690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.154927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.154937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.155122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.155133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.155266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.155279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.155409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.155420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.155580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.155590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.155783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.155793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.155966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.155976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.156234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.156245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 
00:35:27.891 [2024-10-06 11:30:25.156361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.156371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.156575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.156585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 Malloc0 00:35:27.891 [2024-10-06 11:30:25.156854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.156867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.157106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.157117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.891 [2024-10-06 11:30:25.157295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.157306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:27.891 [2024-10-06 11:30:25.157489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.157500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.891 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:27.891 [2024-10-06 11:30:25.157665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.157675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.157906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.157917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 
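Amid the connect() retries, the trace shows rpc_cmd nvmf_create_transport -t tcp -o being issued against the target; this is the step that produces the "TCP Transport Init" notice a little further down. A minimal sketch of the same step done by hand (assuming the standard scripts/rpc.py helper; the -o flag is simply carried over from the options the test passes):
  # create the NVMe-oF TCP transport inside the running target application
  ./scripts/rpc.py nvmf_create_transport -t tcp -o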
00:35:27.891 [2024-10-06 11:30:25.158095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.158105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.158339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.158350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.158535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.158544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.158736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.158746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.158998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.159008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.159249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.159260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.159440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.159450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.159628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.159638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.159919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.159928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.160122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.160132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 
00:35:27.891 [2024-10-06 11:30:25.160254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.891 [2024-10-06 11:30:25.160264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.891 qpair failed and we were unable to recover it. 00:35:27.891 [2024-10-06 11:30:25.160393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:27.891 [2024-10-06 11:30:25.160430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.160440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.160719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.160729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.160925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.160934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.161131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.161142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.161383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.161393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.161584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.161594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.161863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.161873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.162103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.162114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 
00:35:27.892 [2024-10-06 11:30:25.162297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.162307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.162562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.162572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.162677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.162686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.162941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.162951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.163210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.163220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.163456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.163466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.163645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.163655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.163759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.163768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.164044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.164054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.164223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.164233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 
00:35:27.892 [2024-10-06 11:30:25.164408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.164418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.164599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.164609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.164870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.164881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.165072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.165083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.165374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.165384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.165520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.165530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.165736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.165746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.165928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.165937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.166192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.166202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.166306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.166315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 
00:35:27.892 [2024-10-06 11:30:25.166489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.166499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.166736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.166746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.166991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.167001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.167163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.167173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.167424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.167435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.167647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.167657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.167787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.167798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.168066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.892 [2024-10-06 11:30:25.168076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.892 qpair failed and we were unable to recover it. 00:35:27.892 [2024-10-06 11:30:25.168258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.168269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.168386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.168395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 
00:35:27.893 [2024-10-06 11:30:25.168567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.168577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.893 [2024-10-06 11:30:25.168756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.168769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:27.893 [2024-10-06 11:30:25.169041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.169052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.893 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:27.893 [2024-10-06 11:30:25.169319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.169331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.169591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.169602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.169835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.169845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.170103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.170114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.170376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.170387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 
00:35:27.893 [2024-10-06 11:30:25.170568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.170578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.170834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.170844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.171073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.171083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.171287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.171298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.171415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.171425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.171593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.171605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.171784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.171794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.172029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.172039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.172308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.172319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.172504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.172513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 
00:35:27.893 [2024-10-06 11:30:25.172724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.172735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.172953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.172963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.173218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.173229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.173439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.173449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.173703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.173713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.173915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.173925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.174104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.174114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.174279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.174288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.174567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.893 [2024-10-06 11:30:25.174578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.893 qpair failed and we were unable to recover it. 00:35:27.893 [2024-10-06 11:30:25.174879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.174889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 
00:35:27.894 [2024-10-06 11:30:25.175165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.175175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.175435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.175444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.175609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.175619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.175756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.175766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.175949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.175958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.176212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.176223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.176473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.176484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.176663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.176674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.894 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:27.894 [2024-10-06 11:30:25.176933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.176944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 
00:35:27.894 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.894 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:27.894 [2024-10-06 11:30:25.177198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.177210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.177407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.177419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.177696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.177706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.177944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.177954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.178187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.178198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.178487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.178497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.178621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.178632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.178867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.178877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.179025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.179035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 
00:35:27.894 [2024-10-06 11:30:25.179232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.179243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.179497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.179508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.179714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.179724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.180003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.180013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.180270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.180281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.180461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.180473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.180663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.180673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.180802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.180812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.181061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.181071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.181235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.181245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 
00:35:27.894 [2024-10-06 11:30:25.181421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.181431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.181628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.181638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.181755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.181765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.181971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.181981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.182181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.182192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.894 qpair failed and we were unable to recover it. 00:35:27.894 [2024-10-06 11:30:25.182391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.894 [2024-10-06 11:30:25.182400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.182584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.182595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.182779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.182788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.182951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.182961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.183167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.183177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 
00:35:27.895 [2024-10-06 11:30:25.183411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.183421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.183583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.183593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.183864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.183875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.184125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.184136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.184297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.184307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.184568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.184579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.895 [2024-10-06 11:30:25.184830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.184841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:27.895 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.895 [2024-10-06 11:30:25.185087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.185098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 
00:35:27.895 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:27.895 [2024-10-06 11:30:25.185280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.185291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.185541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.185551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.185747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.185759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.186014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.186025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.186206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.186216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.186472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.186482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.186675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.186685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.186950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.186960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.187217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.187228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 
00:35:27.895 [2024-10-06 11:30:25.187413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.187424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.187608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.187618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.187848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.187858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.188098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.188108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.188390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.188400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.188638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.895 [2024-10-06 11:30:25.188648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f81cc000b90 with addr=10.0.0.2, port=4420 00:35:27.895 qpair failed and we were unable to recover it. 00:35:27.895 [2024-10-06 11:30:25.188662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:27.895 [2024-10-06 11:30:25.191093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.895 [2024-10-06 11:30:25.191176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.895 [2024-10-06 11:30:25.191197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.895 [2024-10-06 11:30:25.191205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.895 [2024-10-06 11:30:25.191212] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.895 [2024-10-06 11:30:25.191233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.895 qpair failed and we were unable to recover it. 
00:35:27.895 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.895 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:27.895 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.895 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:27.895 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.895 [2024-10-06 11:30:25.201006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.895 [2024-10-06 11:30:25.201127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.895 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2272135 00:35:27.895 [2024-10-06 11:30:25.201145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.895 [2024-10-06 11:30:25.201152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.896 [2024-10-06 11:30:25.201159] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.896 [2024-10-06 11:30:25.201177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.896 qpair failed and we were unable to recover it. 00:35:27.896 [2024-10-06 11:30:25.211033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.896 [2024-10-06 11:30:25.211107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.896 [2024-10-06 11:30:25.211123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.896 [2024-10-06 11:30:25.211129] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.896 [2024-10-06 11:30:25.211135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.896 [2024-10-06 11:30:25.211151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.896 qpair failed and we were unable to recover it. 
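For reference, the setup steps traced above through rpc_cmd (SPDK's rpc.py wrapper) amount to the following standalone calls. This is a minimal sketch assuming rpc.py is on PATH, the target uses the default RPC socket, and a TCP transport plus the Malloc0 bdev were already created earlier in the run (not shown in this excerpt):

# Create the subsystem, allow any host, and set its serial number
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# Attach the Malloc0 bdev as a namespace of the subsystem
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Expose the subsystem and the discovery service on 10.0.0.2:4420 over TCP
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420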
00:35:27.896 [2024-10-06 11:30:25.220995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.896 [2024-10-06 11:30:25.221068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.896 [2024-10-06 11:30:25.221084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.896 [2024-10-06 11:30:25.221090] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.896 [2024-10-06 11:30:25.221101] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.896 [2024-10-06 11:30:25.221116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.896 qpair failed and we were unable to recover it. 00:35:27.896 [2024-10-06 11:30:25.230954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.896 [2024-10-06 11:30:25.231021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.896 [2024-10-06 11:30:25.231036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.896 [2024-10-06 11:30:25.231042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.896 [2024-10-06 11:30:25.231047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.896 [2024-10-06 11:30:25.231067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.896 qpair failed and we were unable to recover it. 00:35:27.896 [2024-10-06 11:30:25.240963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.896 [2024-10-06 11:30:25.241028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.896 [2024-10-06 11:30:25.241043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.896 [2024-10-06 11:30:25.241049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.896 [2024-10-06 11:30:25.241055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.896 [2024-10-06 11:30:25.241074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.896 qpair failed and we were unable to recover it. 
00:35:27.896 [2024-10-06 11:30:25.250981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.896 [2024-10-06 11:30:25.251048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.896 [2024-10-06 11:30:25.251066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.896 [2024-10-06 11:30:25.251073] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.896 [2024-10-06 11:30:25.251078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.896 [2024-10-06 11:30:25.251093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.896 qpair failed and we were unable to recover it. 00:35:27.896 [2024-10-06 11:30:25.261002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.896 [2024-10-06 11:30:25.261073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.896 [2024-10-06 11:30:25.261088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.896 [2024-10-06 11:30:25.261095] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.896 [2024-10-06 11:30:25.261101] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.896 [2024-10-06 11:30:25.261116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.896 qpair failed and we were unable to recover it. 00:35:27.896 [2024-10-06 11:30:25.271106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.896 [2024-10-06 11:30:25.271177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.896 [2024-10-06 11:30:25.271191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.896 [2024-10-06 11:30:25.271198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.896 [2024-10-06 11:30:25.271204] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.896 [2024-10-06 11:30:25.271218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.896 qpair failed and we were unable to recover it. 
00:35:27.896 [2024-10-06 11:30:25.281095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.896 [2024-10-06 11:30:25.281159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.896 [2024-10-06 11:30:25.281174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.896 [2024-10-06 11:30:25.281180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.896 [2024-10-06 11:30:25.281186] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.896 [2024-10-06 11:30:25.281201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.896 qpair failed and we were unable to recover it. 00:35:27.896 [2024-10-06 11:30:25.291131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.896 [2024-10-06 11:30:25.291199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.896 [2024-10-06 11:30:25.291214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.896 [2024-10-06 11:30:25.291220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.896 [2024-10-06 11:30:25.291226] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.896 [2024-10-06 11:30:25.291241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.896 qpair failed and we were unable to recover it. 00:35:27.896 [2024-10-06 11:30:25.301083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.896 [2024-10-06 11:30:25.301153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.896 [2024-10-06 11:30:25.301167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.896 [2024-10-06 11:30:25.301174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.896 [2024-10-06 11:30:25.301180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.896 [2024-10-06 11:30:25.301194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.896 qpair failed and we were unable to recover it. 
00:35:27.896 [2024-10-06 11:30:25.311172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.896 [2024-10-06 11:30:25.311240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.896 [2024-10-06 11:30:25.311255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.896 [2024-10-06 11:30:25.311265] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.896 [2024-10-06 11:30:25.311270] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.896 [2024-10-06 11:30:25.311285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.896 qpair failed and we were unable to recover it. 00:35:27.896 [2024-10-06 11:30:25.321163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.896 [2024-10-06 11:30:25.321229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.896 [2024-10-06 11:30:25.321244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.896 [2024-10-06 11:30:25.321251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.896 [2024-10-06 11:30:25.321257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.896 [2024-10-06 11:30:25.321272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.896 qpair failed and we were unable to recover it. 00:35:27.896 [2024-10-06 11:30:25.331230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.896 [2024-10-06 11:30:25.331293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.896 [2024-10-06 11:30:25.331307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.896 [2024-10-06 11:30:25.331314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.897 [2024-10-06 11:30:25.331320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.897 [2024-10-06 11:30:25.331334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.897 qpair failed and we were unable to recover it. 
00:35:27.897 [2024-10-06 11:30:25.341274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.897 [2024-10-06 11:30:25.341343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.897 [2024-10-06 11:30:25.341358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.897 [2024-10-06 11:30:25.341365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.897 [2024-10-06 11:30:25.341370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.897 [2024-10-06 11:30:25.341386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.897 qpair failed and we were unable to recover it. 00:35:27.897 [2024-10-06 11:30:25.351275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.897 [2024-10-06 11:30:25.351338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.897 [2024-10-06 11:30:25.351353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.897 [2024-10-06 11:30:25.351359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.897 [2024-10-06 11:30:25.351366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.897 [2024-10-06 11:30:25.351380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.897 qpair failed and we were unable to recover it. 00:35:27.897 [2024-10-06 11:30:25.361335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.897 [2024-10-06 11:30:25.361398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.897 [2024-10-06 11:30:25.361413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.897 [2024-10-06 11:30:25.361420] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.897 [2024-10-06 11:30:25.361426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.897 [2024-10-06 11:30:25.361440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.897 qpair failed and we were unable to recover it. 
00:35:27.897 [2024-10-06 11:30:25.371322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.897 [2024-10-06 11:30:25.371396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.897 [2024-10-06 11:30:25.371411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.897 [2024-10-06 11:30:25.371417] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.897 [2024-10-06 11:30:25.371423] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.897 [2024-10-06 11:30:25.371438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.897 qpair failed and we were unable to recover it. 00:35:27.897 [2024-10-06 11:30:25.381359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.897 [2024-10-06 11:30:25.381425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.897 [2024-10-06 11:30:25.381440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.897 [2024-10-06 11:30:25.381446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.897 [2024-10-06 11:30:25.381452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.897 [2024-10-06 11:30:25.381467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.897 qpair failed and we were unable to recover it. 00:35:27.897 [2024-10-06 11:30:25.391450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.897 [2024-10-06 11:30:25.391513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.897 [2024-10-06 11:30:25.391528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.897 [2024-10-06 11:30:25.391535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.897 [2024-10-06 11:30:25.391541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.897 [2024-10-06 11:30:25.391556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.897 qpair failed and we were unable to recover it. 
00:35:27.897 [2024-10-06 11:30:25.401419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.897 [2024-10-06 11:30:25.401480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.897 [2024-10-06 11:30:25.401495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.897 [2024-10-06 11:30:25.401507] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.897 [2024-10-06 11:30:25.401513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.897 [2024-10-06 11:30:25.401528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.897 qpair failed and we were unable to recover it. 00:35:27.897 [2024-10-06 11:30:25.411417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.897 [2024-10-06 11:30:25.411480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.897 [2024-10-06 11:30:25.411495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.897 [2024-10-06 11:30:25.411501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.897 [2024-10-06 11:30:25.411508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.897 [2024-10-06 11:30:25.411523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.897 qpair failed and we were unable to recover it. 00:35:27.897 [2024-10-06 11:30:25.421481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.897 [2024-10-06 11:30:25.421586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.897 [2024-10-06 11:30:25.421600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.897 [2024-10-06 11:30:25.421607] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.897 [2024-10-06 11:30:25.421613] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.897 [2024-10-06 11:30:25.421628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.897 qpair failed and we were unable to recover it. 
00:35:27.897 [2024-10-06 11:30:25.431532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.897 [2024-10-06 11:30:25.431597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.897 [2024-10-06 11:30:25.431613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.897 [2024-10-06 11:30:25.431619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.897 [2024-10-06 11:30:25.431625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.897 [2024-10-06 11:30:25.431640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.897 qpair failed and we were unable to recover it. 00:35:27.897 [2024-10-06 11:30:25.441548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.897 [2024-10-06 11:30:25.441614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.897 [2024-10-06 11:30:25.441628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.897 [2024-10-06 11:30:25.441635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.897 [2024-10-06 11:30:25.441641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.897 [2024-10-06 11:30:25.441656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.897 qpair failed and we were unable to recover it. 00:35:27.897 [2024-10-06 11:30:25.451554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:27.897 [2024-10-06 11:30:25.451620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:27.897 [2024-10-06 11:30:25.451634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:27.897 [2024-10-06 11:30:25.451641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:27.897 [2024-10-06 11:30:25.451647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:27.897 [2024-10-06 11:30:25.451661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:27.897 qpair failed and we were unable to recover it. 
00:35:28.159 [2024-10-06 11:30:25.461541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.159 [2024-10-06 11:30:25.461606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.159 [2024-10-06 11:30:25.461620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.159 [2024-10-06 11:30:25.461626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.159 [2024-10-06 11:30:25.461632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.159 [2024-10-06 11:30:25.461646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.159 qpair failed and we were unable to recover it. 00:35:28.159 [2024-10-06 11:30:25.471640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.159 [2024-10-06 11:30:25.471703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.159 [2024-10-06 11:30:25.471718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.159 [2024-10-06 11:30:25.471724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.159 [2024-10-06 11:30:25.471730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.159 [2024-10-06 11:30:25.471744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.159 qpair failed and we were unable to recover it. 00:35:28.159 [2024-10-06 11:30:25.481643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.159 [2024-10-06 11:30:25.481702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.159 [2024-10-06 11:30:25.481716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.159 [2024-10-06 11:30:25.481723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.159 [2024-10-06 11:30:25.481729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.159 [2024-10-06 11:30:25.481743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.159 qpair failed and we were unable to recover it. 
00:35:28.159 [2024-10-06 11:30:25.491670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.159 [2024-10-06 11:30:25.491730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.159 [2024-10-06 11:30:25.491747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.159 [2024-10-06 11:30:25.491753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.159 [2024-10-06 11:30:25.491759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.159 [2024-10-06 11:30:25.491773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.159 qpair failed and we were unable to recover it. 00:35:28.159 [2024-10-06 11:30:25.501753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.159 [2024-10-06 11:30:25.501821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.159 [2024-10-06 11:30:25.501836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.159 [2024-10-06 11:30:25.501842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.159 [2024-10-06 11:30:25.501848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.159 [2024-10-06 11:30:25.501863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.159 qpair failed and we were unable to recover it. 00:35:28.159 [2024-10-06 11:30:25.511727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.159 [2024-10-06 11:30:25.511790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.159 [2024-10-06 11:30:25.511805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.159 [2024-10-06 11:30:25.511811] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.159 [2024-10-06 11:30:25.511817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.159 [2024-10-06 11:30:25.511832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.159 qpair failed and we were unable to recover it. 
00:35:28.159 [2024-10-06 11:30:25.521792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.159 [2024-10-06 11:30:25.521851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.159 [2024-10-06 11:30:25.521865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.159 [2024-10-06 11:30:25.521872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.159 [2024-10-06 11:30:25.521878] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.159 [2024-10-06 11:30:25.521892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.159 qpair failed and we were unable to recover it. 00:35:28.159 [2024-10-06 11:30:25.531777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.159 [2024-10-06 11:30:25.531878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.159 [2024-10-06 11:30:25.531893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.159 [2024-10-06 11:30:25.531900] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.159 [2024-10-06 11:30:25.531906] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.159 [2024-10-06 11:30:25.531923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.159 qpair failed and we were unable to recover it. 00:35:28.159 [2024-10-06 11:30:25.541842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.159 [2024-10-06 11:30:25.541908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.159 [2024-10-06 11:30:25.541922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.159 [2024-10-06 11:30:25.541929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.159 [2024-10-06 11:30:25.541935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.159 [2024-10-06 11:30:25.541950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.159 qpair failed and we were unable to recover it. 
00:35:28.159 [2024-10-06 11:30:25.551841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.159 [2024-10-06 11:30:25.551910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.159 [2024-10-06 11:30:25.551924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.159 [2024-10-06 11:30:25.551931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.159 [2024-10-06 11:30:25.551937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.159 [2024-10-06 11:30:25.551952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.159 qpair failed and we were unable to recover it. 00:35:28.159 [2024-10-06 11:30:25.561913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.159 [2024-10-06 11:30:25.561985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.159 [2024-10-06 11:30:25.562000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.159 [2024-10-06 11:30:25.562006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.159 [2024-10-06 11:30:25.562012] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.159 [2024-10-06 11:30:25.562027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.159 qpair failed and we were unable to recover it. 00:35:28.159 [2024-10-06 11:30:25.571933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.159 [2024-10-06 11:30:25.571994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.159 [2024-10-06 11:30:25.572009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.159 [2024-10-06 11:30:25.572015] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.159 [2024-10-06 11:30:25.572021] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.159 [2024-10-06 11:30:25.572035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.159 qpair failed and we were unable to recover it. 
00:35:28.159 [2024-10-06 11:30:25.581961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.160 [2024-10-06 11:30:25.582026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.160 [2024-10-06 11:30:25.582044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.160 [2024-10-06 11:30:25.582051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.160 [2024-10-06 11:30:25.582057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.160 [2024-10-06 11:30:25.582076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.160 qpair failed and we were unable to recover it. 00:35:28.160 [2024-10-06 11:30:25.591944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.160 [2024-10-06 11:30:25.592007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.160 [2024-10-06 11:30:25.592022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.160 [2024-10-06 11:30:25.592029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.160 [2024-10-06 11:30:25.592034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.160 [2024-10-06 11:30:25.592050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.160 qpair failed and we were unable to recover it. 00:35:28.160 [2024-10-06 11:30:25.601985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.160 [2024-10-06 11:30:25.602050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.160 [2024-10-06 11:30:25.602067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.160 [2024-10-06 11:30:25.602074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.160 [2024-10-06 11:30:25.602080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.160 [2024-10-06 11:30:25.602095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.160 qpair failed and we were unable to recover it. 
00:35:28.160 [2024-10-06 11:30:25.612025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.160 [2024-10-06 11:30:25.612092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.160 [2024-10-06 11:30:25.612107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.160 [2024-10-06 11:30:25.612113] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.160 [2024-10-06 11:30:25.612119] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.160 [2024-10-06 11:30:25.612134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.160 qpair failed and we were unable to recover it. 00:35:28.160 [2024-10-06 11:30:25.622051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.160 [2024-10-06 11:30:25.622141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.160 [2024-10-06 11:30:25.622155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.160 [2024-10-06 11:30:25.622162] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.160 [2024-10-06 11:30:25.622167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.160 [2024-10-06 11:30:25.622186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.160 qpair failed and we were unable to recover it. 00:35:28.160 [2024-10-06 11:30:25.632017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.160 [2024-10-06 11:30:25.632082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.160 [2024-10-06 11:30:25.632097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.160 [2024-10-06 11:30:25.632104] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.160 [2024-10-06 11:30:25.632110] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.160 [2024-10-06 11:30:25.632124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.160 qpair failed and we were unable to recover it. 
00:35:28.160 [2024-10-06 11:30:25.642086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.160 [2024-10-06 11:30:25.642149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.160 [2024-10-06 11:30:25.642163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.160 [2024-10-06 11:30:25.642170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.160 [2024-10-06 11:30:25.642176] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.160 [2024-10-06 11:30:25.642190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.160 qpair failed and we were unable to recover it. 00:35:28.160 [2024-10-06 11:30:25.652055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.160 [2024-10-06 11:30:25.652145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.160 [2024-10-06 11:30:25.652160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.160 [2024-10-06 11:30:25.652166] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.160 [2024-10-06 11:30:25.652171] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.160 [2024-10-06 11:30:25.652186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.160 qpair failed and we were unable to recover it. 00:35:28.160 [2024-10-06 11:30:25.662177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.160 [2024-10-06 11:30:25.662245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.160 [2024-10-06 11:30:25.662260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.160 [2024-10-06 11:30:25.662266] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.160 [2024-10-06 11:30:25.662272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.160 [2024-10-06 11:30:25.662288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.160 qpair failed and we were unable to recover it. 
00:35:28.160 [2024-10-06 11:30:25.672165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.160 [2024-10-06 11:30:25.672230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.160 [2024-10-06 11:30:25.672249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.160 [2024-10-06 11:30:25.672256] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.160 [2024-10-06 11:30:25.672262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.160 [2024-10-06 11:30:25.672278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.160 qpair failed and we were unable to recover it. 00:35:28.160 [2024-10-06 11:30:25.682144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.160 [2024-10-06 11:30:25.682209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.160 [2024-10-06 11:30:25.682224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.160 [2024-10-06 11:30:25.682230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.160 [2024-10-06 11:30:25.682236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.160 [2024-10-06 11:30:25.682251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.160 qpair failed and we were unable to recover it. 00:35:28.160 [2024-10-06 11:30:25.692229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.160 [2024-10-06 11:30:25.692312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.160 [2024-10-06 11:30:25.692326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.160 [2024-10-06 11:30:25.692332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.160 [2024-10-06 11:30:25.692338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.160 [2024-10-06 11:30:25.692353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.160 qpair failed and we were unable to recover it. 
00:35:28.160 [2024-10-06 11:30:25.702203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.160 [2024-10-06 11:30:25.702269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.160 [2024-10-06 11:30:25.702283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.160 [2024-10-06 11:30:25.702289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.160 [2024-10-06 11:30:25.702295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.160 [2024-10-06 11:30:25.702310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.160 qpair failed and we were unable to recover it. 00:35:28.160 [2024-10-06 11:30:25.712277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.161 [2024-10-06 11:30:25.712347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.161 [2024-10-06 11:30:25.712361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.161 [2024-10-06 11:30:25.712368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.161 [2024-10-06 11:30:25.712377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.161 [2024-10-06 11:30:25.712392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.161 qpair failed and we were unable to recover it. 00:35:28.161 [2024-10-06 11:30:25.722342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.161 [2024-10-06 11:30:25.722404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.161 [2024-10-06 11:30:25.722419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.161 [2024-10-06 11:30:25.722425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.161 [2024-10-06 11:30:25.722431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.161 [2024-10-06 11:30:25.722446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.161 qpair failed and we were unable to recover it. 
00:35:28.422 [2024-10-06 11:30:25.732278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.422 [2024-10-06 11:30:25.732342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.422 [2024-10-06 11:30:25.732356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.422 [2024-10-06 11:30:25.732363] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.422 [2024-10-06 11:30:25.732369] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.422 [2024-10-06 11:30:25.732383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.422 qpair failed and we were unable to recover it. 00:35:28.422 [2024-10-06 11:30:25.742396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.422 [2024-10-06 11:30:25.742469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.422 [2024-10-06 11:30:25.742483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.422 [2024-10-06 11:30:25.742490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.422 [2024-10-06 11:30:25.742496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.422 [2024-10-06 11:30:25.742510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.422 qpair failed and we were unable to recover it. 00:35:28.422 [2024-10-06 11:30:25.752448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.422 [2024-10-06 11:30:25.752515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.422 [2024-10-06 11:30:25.752529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.422 [2024-10-06 11:30:25.752535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.422 [2024-10-06 11:30:25.752541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.422 [2024-10-06 11:30:25.752556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.422 qpair failed and we were unable to recover it. 
00:35:28.422 [2024-10-06 11:30:25.762472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.422 [2024-10-06 11:30:25.762536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.422 [2024-10-06 11:30:25.762551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.422 [2024-10-06 11:30:25.762557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.422 [2024-10-06 11:30:25.762563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.422 [2024-10-06 11:30:25.762577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.422 qpair failed and we were unable to recover it. 00:35:28.422 [2024-10-06 11:30:25.772514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.422 [2024-10-06 11:30:25.772581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.422 [2024-10-06 11:30:25.772595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.422 [2024-10-06 11:30:25.772602] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.422 [2024-10-06 11:30:25.772608] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.422 [2024-10-06 11:30:25.772622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.422 qpair failed and we were unable to recover it. 00:35:28.422 [2024-10-06 11:30:25.782452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.422 [2024-10-06 11:30:25.782515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.422 [2024-10-06 11:30:25.782529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.422 [2024-10-06 11:30:25.782536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.422 [2024-10-06 11:30:25.782542] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.422 [2024-10-06 11:30:25.782556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.422 qpair failed and we were unable to recover it. 
00:35:28.422 [2024-10-06 11:30:25.792514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.422 [2024-10-06 11:30:25.792579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.422 [2024-10-06 11:30:25.792593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.423 [2024-10-06 11:30:25.792599] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.423 [2024-10-06 11:30:25.792606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.423 [2024-10-06 11:30:25.792620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.423 qpair failed and we were unable to recover it. 00:35:28.423 [2024-10-06 11:30:25.802525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.423 [2024-10-06 11:30:25.802590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.423 [2024-10-06 11:30:25.802604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.423 [2024-10-06 11:30:25.802611] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.423 [2024-10-06 11:30:25.802620] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.423 [2024-10-06 11:30:25.802635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.423 qpair failed and we were unable to recover it. 00:35:28.423 [2024-10-06 11:30:25.812565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.423 [2024-10-06 11:30:25.812626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.423 [2024-10-06 11:30:25.812640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.423 [2024-10-06 11:30:25.812647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.423 [2024-10-06 11:30:25.812653] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.423 [2024-10-06 11:30:25.812667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.423 qpair failed and we were unable to recover it. 
00:35:28.423 [2024-10-06 11:30:25.822614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.423 [2024-10-06 11:30:25.822681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.423 [2024-10-06 11:30:25.822695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.423 [2024-10-06 11:30:25.822702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.423 [2024-10-06 11:30:25.822708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.423 [2024-10-06 11:30:25.822722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.423 qpair failed and we were unable to recover it. 00:35:28.423 [2024-10-06 11:30:25.832574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.423 [2024-10-06 11:30:25.832639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.423 [2024-10-06 11:30:25.832654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.423 [2024-10-06 11:30:25.832660] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.423 [2024-10-06 11:30:25.832666] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.423 [2024-10-06 11:30:25.832680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.423 qpair failed and we were unable to recover it. 00:35:28.423 [2024-10-06 11:30:25.842711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.423 [2024-10-06 11:30:25.842783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.423 [2024-10-06 11:30:25.842798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.423 [2024-10-06 11:30:25.842805] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.423 [2024-10-06 11:30:25.842811] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.423 [2024-10-06 11:30:25.842826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.423 qpair failed and we were unable to recover it. 
00:35:28.423 [2024-10-06 11:30:25.852616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.423 [2024-10-06 11:30:25.852680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.423 [2024-10-06 11:30:25.852696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.423 [2024-10-06 11:30:25.852702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.423 [2024-10-06 11:30:25.852709] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.423 [2024-10-06 11:30:25.852724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.423 qpair failed and we were unable to recover it. 00:35:28.423 [2024-10-06 11:30:25.862724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.423 [2024-10-06 11:30:25.862796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.423 [2024-10-06 11:30:25.862811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.423 [2024-10-06 11:30:25.862818] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.423 [2024-10-06 11:30:25.862824] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.423 [2024-10-06 11:30:25.862838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.423 qpair failed and we were unable to recover it. 00:35:28.423 [2024-10-06 11:30:25.872771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.423 [2024-10-06 11:30:25.872881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.423 [2024-10-06 11:30:25.872903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.423 [2024-10-06 11:30:25.872910] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.423 [2024-10-06 11:30:25.872916] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.423 [2024-10-06 11:30:25.872930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.423 qpair failed and we were unable to recover it. 
00:35:28.423 [2024-10-06 11:30:25.882770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.423 [2024-10-06 11:30:25.882836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.423 [2024-10-06 11:30:25.882850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.423 [2024-10-06 11:30:25.882857] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.423 [2024-10-06 11:30:25.882862] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.423 [2024-10-06 11:30:25.882877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.423 qpair failed and we were unable to recover it. 00:35:28.423 [2024-10-06 11:30:25.892733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.423 [2024-10-06 11:30:25.892797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.423 [2024-10-06 11:30:25.892811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.423 [2024-10-06 11:30:25.892821] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.423 [2024-10-06 11:30:25.892827] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.423 [2024-10-06 11:30:25.892841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.423 qpair failed and we were unable to recover it. 00:35:28.423 [2024-10-06 11:30:25.902859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.423 [2024-10-06 11:30:25.902926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.423 [2024-10-06 11:30:25.902941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.423 [2024-10-06 11:30:25.902948] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.423 [2024-10-06 11:30:25.902954] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.423 [2024-10-06 11:30:25.902969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.423 qpair failed and we were unable to recover it. 
00:35:28.423 [2024-10-06 11:30:25.912882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.423 [2024-10-06 11:30:25.912953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.423 [2024-10-06 11:30:25.912967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.423 [2024-10-06 11:30:25.912974] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.423 [2024-10-06 11:30:25.912980] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.423 [2024-10-06 11:30:25.912995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.423 qpair failed and we were unable to recover it. 00:35:28.423 [2024-10-06 11:30:25.922848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.423 [2024-10-06 11:30:25.922915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.424 [2024-10-06 11:30:25.922930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.424 [2024-10-06 11:30:25.922937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.424 [2024-10-06 11:30:25.922942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.424 [2024-10-06 11:30:25.922957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.424 qpair failed and we were unable to recover it. 00:35:28.424 [2024-10-06 11:30:25.932886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.424 [2024-10-06 11:30:25.932951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.424 [2024-10-06 11:30:25.932966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.424 [2024-10-06 11:30:25.932972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.424 [2024-10-06 11:30:25.932978] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.424 [2024-10-06 11:30:25.932993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.424 qpair failed and we were unable to recover it. 
00:35:28.424 [2024-10-06 11:30:25.942990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.424 [2024-10-06 11:30:25.943062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.424 [2024-10-06 11:30:25.943077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.424 [2024-10-06 11:30:25.943084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.424 [2024-10-06 11:30:25.943090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.424 [2024-10-06 11:30:25.943105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.424 qpair failed and we were unable to recover it. 00:35:28.424 [2024-10-06 11:30:25.952948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.424 [2024-10-06 11:30:25.953013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.424 [2024-10-06 11:30:25.953027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.424 [2024-10-06 11:30:25.953034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.424 [2024-10-06 11:30:25.953039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.424 [2024-10-06 11:30:25.953054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.424 qpair failed and we were unable to recover it. 00:35:28.424 [2024-10-06 11:30:25.963035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.424 [2024-10-06 11:30:25.963103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.424 [2024-10-06 11:30:25.963118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.424 [2024-10-06 11:30:25.963125] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.424 [2024-10-06 11:30:25.963131] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.424 [2024-10-06 11:30:25.963146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.424 qpair failed and we were unable to recover it. 
00:35:28.424 [2024-10-06 11:30:25.973085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.424 [2024-10-06 11:30:25.973186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.424 [2024-10-06 11:30:25.973201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.424 [2024-10-06 11:30:25.973207] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.424 [2024-10-06 11:30:25.973214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.424 [2024-10-06 11:30:25.973228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.424 qpair failed and we were unable to recover it. 00:35:28.424 [2024-10-06 11:30:25.983118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.424 [2024-10-06 11:30:25.983190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.424 [2024-10-06 11:30:25.983208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.424 [2024-10-06 11:30:25.983215] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.424 [2024-10-06 11:30:25.983220] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.424 [2024-10-06 11:30:25.983235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.424 qpair failed and we were unable to recover it. 00:35:28.424 [2024-10-06 11:30:25.993090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.424 [2024-10-06 11:30:25.993160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.424 [2024-10-06 11:30:25.993175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.424 [2024-10-06 11:30:25.993182] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.424 [2024-10-06 11:30:25.993188] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.424 [2024-10-06 11:30:25.993202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.424 qpair failed and we were unable to recover it. 
00:35:28.685 [2024-10-06 11:30:26.003276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.685 [2024-10-06 11:30:26.003353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.685 [2024-10-06 11:30:26.003368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.685 [2024-10-06 11:30:26.003375] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.685 [2024-10-06 11:30:26.003381] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.685 [2024-10-06 11:30:26.003397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.685 qpair failed and we were unable to recover it. 00:35:28.685 [2024-10-06 11:30:26.013163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.685 [2024-10-06 11:30:26.013233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.685 [2024-10-06 11:30:26.013248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.685 [2024-10-06 11:30:26.013254] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.685 [2024-10-06 11:30:26.013260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.685 [2024-10-06 11:30:26.013275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.685 qpair failed and we were unable to recover it. 00:35:28.685 [2024-10-06 11:30:26.023212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.685 [2024-10-06 11:30:26.023333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.685 [2024-10-06 11:30:26.023348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.685 [2024-10-06 11:30:26.023355] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.685 [2024-10-06 11:30:26.023361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.685 [2024-10-06 11:30:26.023377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.685 qpair failed and we were unable to recover it. 
00:35:28.685 [2024-10-06 11:30:26.033257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.685 [2024-10-06 11:30:26.033322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.685 [2024-10-06 11:30:26.033337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.685 [2024-10-06 11:30:26.033344] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.685 [2024-10-06 11:30:26.033349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.685 [2024-10-06 11:30:26.033365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.685 qpair failed and we were unable to recover it. 00:35:28.685 [2024-10-06 11:30:26.043219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.685 [2024-10-06 11:30:26.043319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.685 [2024-10-06 11:30:26.043335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.685 [2024-10-06 11:30:26.043342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.685 [2024-10-06 11:30:26.043347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.685 [2024-10-06 11:30:26.043363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.685 qpair failed and we were unable to recover it. 00:35:28.685 [2024-10-06 11:30:26.053201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.685 [2024-10-06 11:30:26.053264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.685 [2024-10-06 11:30:26.053280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.685 [2024-10-06 11:30:26.053287] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.685 [2024-10-06 11:30:26.053293] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.685 [2024-10-06 11:30:26.053308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.685 qpair failed and we were unable to recover it. 
00:35:28.685 [2024-10-06 11:30:26.063316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.685 [2024-10-06 11:30:26.063397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.685 [2024-10-06 11:30:26.063411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.685 [2024-10-06 11:30:26.063417] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.685 [2024-10-06 11:30:26.063423] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.685 [2024-10-06 11:30:26.063438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.685 qpair failed and we were unable to recover it. 00:35:28.685 [2024-10-06 11:30:26.073338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.685 [2024-10-06 11:30:26.073404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.685 [2024-10-06 11:30:26.073422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.685 [2024-10-06 11:30:26.073428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.685 [2024-10-06 11:30:26.073434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.685 [2024-10-06 11:30:26.073448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.685 qpair failed and we were unable to recover it. 00:35:28.685 [2024-10-06 11:30:26.083340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.685 [2024-10-06 11:30:26.083408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.685 [2024-10-06 11:30:26.083423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.685 [2024-10-06 11:30:26.083429] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.685 [2024-10-06 11:30:26.083435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.685 [2024-10-06 11:30:26.083449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.685 qpair failed and we were unable to recover it. 
00:35:28.685 [2024-10-06 11:30:26.093432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.685 [2024-10-06 11:30:26.093534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.685 [2024-10-06 11:30:26.093548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.686 [2024-10-06 11:30:26.093555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.686 [2024-10-06 11:30:26.093561] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.686 [2024-10-06 11:30:26.093576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.686 qpair failed and we were unable to recover it. 00:35:28.686 [2024-10-06 11:30:26.103457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.686 [2024-10-06 11:30:26.103522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.686 [2024-10-06 11:30:26.103536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.686 [2024-10-06 11:30:26.103543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.686 [2024-10-06 11:30:26.103548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.686 [2024-10-06 11:30:26.103563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.686 qpair failed and we were unable to recover it. 00:35:28.686 [2024-10-06 11:30:26.113496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.686 [2024-10-06 11:30:26.113599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.686 [2024-10-06 11:30:26.113613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.686 [2024-10-06 11:30:26.113619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.686 [2024-10-06 11:30:26.113626] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.686 [2024-10-06 11:30:26.113647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.686 qpair failed and we were unable to recover it. 
00:35:28.686 [2024-10-06 11:30:26.123484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.686 [2024-10-06 11:30:26.123582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.686 [2024-10-06 11:30:26.123596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.686 [2024-10-06 11:30:26.123602] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.686 [2024-10-06 11:30:26.123608] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.686 [2024-10-06 11:30:26.123623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.686 qpair failed and we were unable to recover it. 00:35:28.686 [2024-10-06 11:30:26.133492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.686 [2024-10-06 11:30:26.133557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.686 [2024-10-06 11:30:26.133572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.686 [2024-10-06 11:30:26.133578] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.686 [2024-10-06 11:30:26.133584] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.686 [2024-10-06 11:30:26.133599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.686 qpair failed and we were unable to recover it. 00:35:28.686 [2024-10-06 11:30:26.143540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.686 [2024-10-06 11:30:26.143606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.686 [2024-10-06 11:30:26.143620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.686 [2024-10-06 11:30:26.143627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.686 [2024-10-06 11:30:26.143632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.686 [2024-10-06 11:30:26.143647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.686 qpair failed and we were unable to recover it. 
00:35:28.686 [2024-10-06 11:30:26.153572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.686 [2024-10-06 11:30:26.153652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.686 [2024-10-06 11:30:26.153667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.686 [2024-10-06 11:30:26.153673] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.686 [2024-10-06 11:30:26.153679] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.686 [2024-10-06 11:30:26.153694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.686 qpair failed and we were unable to recover it. 00:35:28.686 [2024-10-06 11:30:26.163634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.686 [2024-10-06 11:30:26.163698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.686 [2024-10-06 11:30:26.163715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.686 [2024-10-06 11:30:26.163722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.686 [2024-10-06 11:30:26.163727] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.686 [2024-10-06 11:30:26.163742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.686 qpair failed and we were unable to recover it. 00:35:28.686 [2024-10-06 11:30:26.173629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.686 [2024-10-06 11:30:26.173690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.686 [2024-10-06 11:30:26.173705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.686 [2024-10-06 11:30:26.173711] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.686 [2024-10-06 11:30:26.173717] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.686 [2024-10-06 11:30:26.173731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.686 qpair failed and we were unable to recover it. 
00:35:28.686 [2024-10-06 11:30:26.183630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.686 [2024-10-06 11:30:26.183698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.686 [2024-10-06 11:30:26.183712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.686 [2024-10-06 11:30:26.183718] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.686 [2024-10-06 11:30:26.183724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.686 [2024-10-06 11:30:26.183739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.686 qpair failed and we were unable to recover it. 00:35:28.686 [2024-10-06 11:30:26.193694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.686 [2024-10-06 11:30:26.193762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.686 [2024-10-06 11:30:26.193776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.686 [2024-10-06 11:30:26.193782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.686 [2024-10-06 11:30:26.193788] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.686 [2024-10-06 11:30:26.193802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.686 qpair failed and we were unable to recover it. 00:35:28.686 [2024-10-06 11:30:26.203631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.686 [2024-10-06 11:30:26.203692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.686 [2024-10-06 11:30:26.203706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.686 [2024-10-06 11:30:26.203713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.686 [2024-10-06 11:30:26.203722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.686 [2024-10-06 11:30:26.203736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.686 qpair failed and we were unable to recover it. 
00:35:28.686 [2024-10-06 11:30:26.213707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.686 [2024-10-06 11:30:26.213770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.686 [2024-10-06 11:30:26.213785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.686 [2024-10-06 11:30:26.213791] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.686 [2024-10-06 11:30:26.213797] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.686 [2024-10-06 11:30:26.213811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.686 qpair failed and we were unable to recover it. 00:35:28.686 [2024-10-06 11:30:26.223785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.686 [2024-10-06 11:30:26.223853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.687 [2024-10-06 11:30:26.223867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.687 [2024-10-06 11:30:26.223873] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.687 [2024-10-06 11:30:26.223879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.687 [2024-10-06 11:30:26.223893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.687 qpair failed and we were unable to recover it. 00:35:28.687 [2024-10-06 11:30:26.233760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.687 [2024-10-06 11:30:26.233866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.687 [2024-10-06 11:30:26.233881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.687 [2024-10-06 11:30:26.233888] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.687 [2024-10-06 11:30:26.233894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.687 [2024-10-06 11:30:26.233909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.687 qpair failed and we were unable to recover it. 
00:35:28.687 [2024-10-06 11:30:26.243811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.687 [2024-10-06 11:30:26.243870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.687 [2024-10-06 11:30:26.243884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.687 [2024-10-06 11:30:26.243891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.687 [2024-10-06 11:30:26.243897] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.687 [2024-10-06 11:30:26.243911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.687 qpair failed and we were unable to recover it. 00:35:28.687 [2024-10-06 11:30:26.253816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.687 [2024-10-06 11:30:26.253883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.687 [2024-10-06 11:30:26.253897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.687 [2024-10-06 11:30:26.253904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.687 [2024-10-06 11:30:26.253909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.687 [2024-10-06 11:30:26.253924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.687 qpair failed and we were unable to recover it. 00:35:28.947 [2024-10-06 11:30:26.263901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.947 [2024-10-06 11:30:26.263968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.947 [2024-10-06 11:30:26.263982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.947 [2024-10-06 11:30:26.263989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.947 [2024-10-06 11:30:26.263995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.947 [2024-10-06 11:30:26.264010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.947 qpair failed and we were unable to recover it. 
00:35:28.947 [2024-10-06 11:30:26.273950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.947 [2024-10-06 11:30:26.274017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.947 [2024-10-06 11:30:26.274032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.947 [2024-10-06 11:30:26.274038] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.947 [2024-10-06 11:30:26.274044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.947 [2024-10-06 11:30:26.274062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.947 qpair failed and we were unable to recover it. 00:35:28.947 [2024-10-06 11:30:26.283919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.947 [2024-10-06 11:30:26.284000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.947 [2024-10-06 11:30:26.284015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.947 [2024-10-06 11:30:26.284021] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.947 [2024-10-06 11:30:26.284027] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.947 [2024-10-06 11:30:26.284041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.947 qpair failed and we were unable to recover it. 00:35:28.947 [2024-10-06 11:30:26.293965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.947 [2024-10-06 11:30:26.294024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.947 [2024-10-06 11:30:26.294038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.947 [2024-10-06 11:30:26.294045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.947 [2024-10-06 11:30:26.294054] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.947 [2024-10-06 11:30:26.294073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.947 qpair failed and we were unable to recover it. 
00:35:28.948 [2024-10-06 11:30:26.303994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.948 [2024-10-06 11:30:26.304084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.948 [2024-10-06 11:30:26.304099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.948 [2024-10-06 11:30:26.304105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.948 [2024-10-06 11:30:26.304111] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.948 [2024-10-06 11:30:26.304126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.948 qpair failed and we were unable to recover it. 00:35:28.948 [2024-10-06 11:30:26.314017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.948 [2024-10-06 11:30:26.314104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.948 [2024-10-06 11:30:26.314118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.948 [2024-10-06 11:30:26.314124] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.948 [2024-10-06 11:30:26.314130] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.948 [2024-10-06 11:30:26.314145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.948 qpair failed and we were unable to recover it. 00:35:28.948 [2024-10-06 11:30:26.324038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.948 [2024-10-06 11:30:26.324107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.948 [2024-10-06 11:30:26.324121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.948 [2024-10-06 11:30:26.324128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.948 [2024-10-06 11:30:26.324133] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.948 [2024-10-06 11:30:26.324148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.948 qpair failed and we were unable to recover it. 
00:35:28.948 [2024-10-06 11:30:26.334090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.948 [2024-10-06 11:30:26.334152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.948 [2024-10-06 11:30:26.334166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.948 [2024-10-06 11:30:26.334173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.948 [2024-10-06 11:30:26.334178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.948 [2024-10-06 11:30:26.334193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.948 qpair failed and we were unable to recover it. 00:35:28.948 [2024-10-06 11:30:26.344138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.948 [2024-10-06 11:30:26.344245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.948 [2024-10-06 11:30:26.344260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.948 [2024-10-06 11:30:26.344266] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.948 [2024-10-06 11:30:26.344273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.948 [2024-10-06 11:30:26.344287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.948 qpair failed and we were unable to recover it. 00:35:28.948 [2024-10-06 11:30:26.354147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.948 [2024-10-06 11:30:26.354207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.948 [2024-10-06 11:30:26.354221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.948 [2024-10-06 11:30:26.354227] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.948 [2024-10-06 11:30:26.354233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.948 [2024-10-06 11:30:26.354247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.948 qpair failed and we were unable to recover it. 
00:35:28.948 [2024-10-06 11:30:26.364237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.948 [2024-10-06 11:30:26.364364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.948 [2024-10-06 11:30:26.364380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.948 [2024-10-06 11:30:26.364386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.948 [2024-10-06 11:30:26.364392] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.948 [2024-10-06 11:30:26.364407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.948 qpair failed and we were unable to recover it. 00:35:28.948 [2024-10-06 11:30:26.374197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.948 [2024-10-06 11:30:26.374269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.948 [2024-10-06 11:30:26.374283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.948 [2024-10-06 11:30:26.374289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.948 [2024-10-06 11:30:26.374295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.948 [2024-10-06 11:30:26.374310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.948 qpair failed and we were unable to recover it. 00:35:28.948 [2024-10-06 11:30:26.384241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.948 [2024-10-06 11:30:26.384309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.948 [2024-10-06 11:30:26.384323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.948 [2024-10-06 11:30:26.384333] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.948 [2024-10-06 11:30:26.384339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.948 [2024-10-06 11:30:26.384353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.948 qpair failed and we were unable to recover it. 
00:35:28.948 [2024-10-06 11:30:26.394238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.948 [2024-10-06 11:30:26.394305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.948 [2024-10-06 11:30:26.394319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.948 [2024-10-06 11:30:26.394325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.948 [2024-10-06 11:30:26.394331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.948 [2024-10-06 11:30:26.394345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.948 qpair failed and we were unable to recover it. 00:35:28.948 [2024-10-06 11:30:26.404272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.948 [2024-10-06 11:30:26.404334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.948 [2024-10-06 11:30:26.404348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.948 [2024-10-06 11:30:26.404354] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.948 [2024-10-06 11:30:26.404360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.948 [2024-10-06 11:30:26.404374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.948 qpair failed and we were unable to recover it. 00:35:28.948 [2024-10-06 11:30:26.414352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.948 [2024-10-06 11:30:26.414415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.948 [2024-10-06 11:30:26.414429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.948 [2024-10-06 11:30:26.414436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.948 [2024-10-06 11:30:26.414442] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.948 [2024-10-06 11:30:26.414456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.948 qpair failed and we were unable to recover it. 
00:35:28.948 [2024-10-06 11:30:26.424342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.948 [2024-10-06 11:30:26.424415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.948 [2024-10-06 11:30:26.424429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.948 [2024-10-06 11:30:26.424436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.948 [2024-10-06 11:30:26.424442] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.948 [2024-10-06 11:30:26.424456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.948 qpair failed and we were unable to recover it. 00:35:28.949 [2024-10-06 11:30:26.434377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.949 [2024-10-06 11:30:26.434445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.949 [2024-10-06 11:30:26.434459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.949 [2024-10-06 11:30:26.434466] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.949 [2024-10-06 11:30:26.434472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.949 [2024-10-06 11:30:26.434486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.949 qpair failed and we were unable to recover it. 00:35:28.949 [2024-10-06 11:30:26.444315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.949 [2024-10-06 11:30:26.444418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.949 [2024-10-06 11:30:26.444432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.949 [2024-10-06 11:30:26.444439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.949 [2024-10-06 11:30:26.444445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.949 [2024-10-06 11:30:26.444460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.949 qpair failed and we were unable to recover it. 
00:35:28.949 [2024-10-06 11:30:26.454411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.949 [2024-10-06 11:30:26.454477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.949 [2024-10-06 11:30:26.454492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.949 [2024-10-06 11:30:26.454499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.949 [2024-10-06 11:30:26.454504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.949 [2024-10-06 11:30:26.454518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.949 qpair failed and we were unable to recover it. 00:35:28.949 [2024-10-06 11:30:26.464471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.949 [2024-10-06 11:30:26.464536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.949 [2024-10-06 11:30:26.464551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.949 [2024-10-06 11:30:26.464558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.949 [2024-10-06 11:30:26.464564] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.949 [2024-10-06 11:30:26.464578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.949 qpair failed and we were unable to recover it. 00:35:28.949 [2024-10-06 11:30:26.474486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.949 [2024-10-06 11:30:26.474556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.949 [2024-10-06 11:30:26.474571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.949 [2024-10-06 11:30:26.474581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.949 [2024-10-06 11:30:26.474587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.949 [2024-10-06 11:30:26.474602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.949 qpair failed and we were unable to recover it. 
00:35:28.949 [2024-10-06 11:30:26.484508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.949 [2024-10-06 11:30:26.484572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.949 [2024-10-06 11:30:26.484587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.949 [2024-10-06 11:30:26.484594] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.949 [2024-10-06 11:30:26.484600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.949 [2024-10-06 11:30:26.484615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.949 qpair failed and we were unable to recover it. 00:35:28.949 [2024-10-06 11:30:26.494538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.949 [2024-10-06 11:30:26.494639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.949 [2024-10-06 11:30:26.494654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.949 [2024-10-06 11:30:26.494661] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.949 [2024-10-06 11:30:26.494667] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.949 [2024-10-06 11:30:26.494683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.949 qpair failed and we were unable to recover it. 00:35:28.949 [2024-10-06 11:30:26.504502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.949 [2024-10-06 11:30:26.504568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.949 [2024-10-06 11:30:26.504583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.949 [2024-10-06 11:30:26.504589] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.949 [2024-10-06 11:30:26.504595] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.949 [2024-10-06 11:30:26.504610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.949 qpair failed and we were unable to recover it. 
00:35:28.949 [2024-10-06 11:30:26.514618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:28.949 [2024-10-06 11:30:26.514698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:28.949 [2024-10-06 11:30:26.514712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:28.949 [2024-10-06 11:30:26.514719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:28.949 [2024-10-06 11:30:26.514725] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:28.949 [2024-10-06 11:30:26.514739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:28.949 qpair failed and we were unable to recover it. 00:35:29.210 [2024-10-06 11:30:26.524627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.210 [2024-10-06 11:30:26.524698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.210 [2024-10-06 11:30:26.524712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.210 [2024-10-06 11:30:26.524719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.210 [2024-10-06 11:30:26.524725] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.210 [2024-10-06 11:30:26.524739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.210 qpair failed and we were unable to recover it. 00:35:29.210 [2024-10-06 11:30:26.534649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.210 [2024-10-06 11:30:26.534708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.210 [2024-10-06 11:30:26.534722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.210 [2024-10-06 11:30:26.534729] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.210 [2024-10-06 11:30:26.534735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.210 [2024-10-06 11:30:26.534749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.210 qpair failed and we were unable to recover it. 
00:35:29.210 [2024-10-06 11:30:26.544684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.210 [2024-10-06 11:30:26.544748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.210 [2024-10-06 11:30:26.544762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.210 [2024-10-06 11:30:26.544769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.210 [2024-10-06 11:30:26.544775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.210 [2024-10-06 11:30:26.544789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.210 qpair failed and we were unable to recover it. 00:35:29.210 [2024-10-06 11:30:26.554699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.210 [2024-10-06 11:30:26.554761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.210 [2024-10-06 11:30:26.554776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.210 [2024-10-06 11:30:26.554782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.210 [2024-10-06 11:30:26.554789] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.210 [2024-10-06 11:30:26.554803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.210 qpair failed and we were unable to recover it. 00:35:29.210 [2024-10-06 11:30:26.564711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.210 [2024-10-06 11:30:26.564775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.210 [2024-10-06 11:30:26.564792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.210 [2024-10-06 11:30:26.564799] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.210 [2024-10-06 11:30:26.564805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.210 [2024-10-06 11:30:26.564820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.210 qpair failed and we were unable to recover it. 
00:35:29.210 [2024-10-06 11:30:26.574780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.210 [2024-10-06 11:30:26.574845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.210 [2024-10-06 11:30:26.574860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.210 [2024-10-06 11:30:26.574866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.210 [2024-10-06 11:30:26.574872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.210 [2024-10-06 11:30:26.574887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.210 qpair failed and we were unable to recover it. 00:35:29.210 [2024-10-06 11:30:26.584804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.210 [2024-10-06 11:30:26.584872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.210 [2024-10-06 11:30:26.584887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.210 [2024-10-06 11:30:26.584893] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.210 [2024-10-06 11:30:26.584900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.210 [2024-10-06 11:30:26.584914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.210 qpair failed and we were unable to recover it. 00:35:29.210 [2024-10-06 11:30:26.594837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.210 [2024-10-06 11:30:26.594918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.210 [2024-10-06 11:30:26.594933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.210 [2024-10-06 11:30:26.594939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.210 [2024-10-06 11:30:26.594945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.210 [2024-10-06 11:30:26.594960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.210 qpair failed and we were unable to recover it. 
00:35:29.210 [2024-10-06 11:30:26.604838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.210 [2024-10-06 11:30:26.604901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.210 [2024-10-06 11:30:26.604915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.210 [2024-10-06 11:30:26.604922] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.211 [2024-10-06 11:30:26.604928] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.211 [2024-10-06 11:30:26.604946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.211 qpair failed and we were unable to recover it. 00:35:29.211 [2024-10-06 11:30:26.614934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.211 [2024-10-06 11:30:26.614994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.211 [2024-10-06 11:30:26.615008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.211 [2024-10-06 11:30:26.615015] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.211 [2024-10-06 11:30:26.615021] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.211 [2024-10-06 11:30:26.615035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.211 qpair failed and we were unable to recover it. 00:35:29.211 [2024-10-06 11:30:26.624921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.211 [2024-10-06 11:30:26.624991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.211 [2024-10-06 11:30:26.625006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.211 [2024-10-06 11:30:26.625012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.211 [2024-10-06 11:30:26.625018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.211 [2024-10-06 11:30:26.625033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.211 qpair failed and we were unable to recover it. 
00:35:29.211 [2024-10-06 11:30:26.634966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.211 [2024-10-06 11:30:26.635031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.211 [2024-10-06 11:30:26.635045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.211 [2024-10-06 11:30:26.635052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.211 [2024-10-06 11:30:26.635061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.211 [2024-10-06 11:30:26.635076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.211 qpair failed and we were unable to recover it. 00:35:29.211 [2024-10-06 11:30:26.644957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.211 [2024-10-06 11:30:26.645024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.211 [2024-10-06 11:30:26.645039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.211 [2024-10-06 11:30:26.645045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.211 [2024-10-06 11:30:26.645051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.211 [2024-10-06 11:30:26.645068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.211 qpair failed and we were unable to recover it. 00:35:29.211 [2024-10-06 11:30:26.654947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.211 [2024-10-06 11:30:26.655019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.211 [2024-10-06 11:30:26.655036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.211 [2024-10-06 11:30:26.655042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.211 [2024-10-06 11:30:26.655048] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.211 [2024-10-06 11:30:26.655066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.211 qpair failed and we were unable to recover it. 
00:35:29.211 [2024-10-06 11:30:26.665054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.211 [2024-10-06 11:30:26.665128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.211 [2024-10-06 11:30:26.665142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.211 [2024-10-06 11:30:26.665149] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.211 [2024-10-06 11:30:26.665154] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.211 [2024-10-06 11:30:26.665169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.211 qpair failed and we were unable to recover it. 00:35:29.211 [2024-10-06 11:30:26.675103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.211 [2024-10-06 11:30:26.675170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.211 [2024-10-06 11:30:26.675186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.211 [2024-10-06 11:30:26.675192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.211 [2024-10-06 11:30:26.675198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.211 [2024-10-06 11:30:26.675214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.211 qpair failed and we were unable to recover it. 00:35:29.211 [2024-10-06 11:30:26.685087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.211 [2024-10-06 11:30:26.685152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.211 [2024-10-06 11:30:26.685166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.211 [2024-10-06 11:30:26.685173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.211 [2024-10-06 11:30:26.685179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.211 [2024-10-06 11:30:26.685194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.211 qpair failed and we were unable to recover it. 
00:35:29.211 [2024-10-06 11:30:26.695103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.211 [2024-10-06 11:30:26.695167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.211 [2024-10-06 11:30:26.695182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.211 [2024-10-06 11:30:26.695188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.211 [2024-10-06 11:30:26.695197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.211 [2024-10-06 11:30:26.695212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.211 qpair failed and we were unable to recover it. 00:35:29.211 [2024-10-06 11:30:26.705157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.211 [2024-10-06 11:30:26.705226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.211 [2024-10-06 11:30:26.705240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.211 [2024-10-06 11:30:26.705247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.211 [2024-10-06 11:30:26.705252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.211 [2024-10-06 11:30:26.705267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.211 qpair failed and we were unable to recover it. 00:35:29.211 [2024-10-06 11:30:26.715191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.211 [2024-10-06 11:30:26.715258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.211 [2024-10-06 11:30:26.715273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.211 [2024-10-06 11:30:26.715279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.211 [2024-10-06 11:30:26.715285] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.211 [2024-10-06 11:30:26.715299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.211 qpair failed and we were unable to recover it. 
00:35:29.211 [2024-10-06 11:30:26.725204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.211 [2024-10-06 11:30:26.725272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.211 [2024-10-06 11:30:26.725288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.211 [2024-10-06 11:30:26.725294] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.211 [2024-10-06 11:30:26.725300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.211 [2024-10-06 11:30:26.725314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.211 qpair failed and we were unable to recover it. 00:35:29.211 [2024-10-06 11:30:26.735223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.211 [2024-10-06 11:30:26.735302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.211 [2024-10-06 11:30:26.735316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.212 [2024-10-06 11:30:26.735322] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.212 [2024-10-06 11:30:26.735327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.212 [2024-10-06 11:30:26.735342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.212 qpair failed and we were unable to recover it. 00:35:29.212 [2024-10-06 11:30:26.745274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.212 [2024-10-06 11:30:26.745345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.212 [2024-10-06 11:30:26.745360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.212 [2024-10-06 11:30:26.745366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.212 [2024-10-06 11:30:26.745372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.212 [2024-10-06 11:30:26.745386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.212 qpair failed and we were unable to recover it. 
00:35:29.212 [2024-10-06 11:30:26.755280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.212 [2024-10-06 11:30:26.755344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.212 [2024-10-06 11:30:26.755359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.212 [2024-10-06 11:30:26.755366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.212 [2024-10-06 11:30:26.755372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.212 [2024-10-06 11:30:26.755386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.212 qpair failed and we were unable to recover it. 00:35:29.212 [2024-10-06 11:30:26.765332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.212 [2024-10-06 11:30:26.765440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.212 [2024-10-06 11:30:26.765455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.212 [2024-10-06 11:30:26.765461] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.212 [2024-10-06 11:30:26.765467] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.212 [2024-10-06 11:30:26.765481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.212 qpair failed and we were unable to recover it. 00:35:29.212 [2024-10-06 11:30:26.775354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.212 [2024-10-06 11:30:26.775422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.212 [2024-10-06 11:30:26.775436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.212 [2024-10-06 11:30:26.775443] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.212 [2024-10-06 11:30:26.775449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.212 [2024-10-06 11:30:26.775463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.212 qpair failed and we were unable to recover it. 
00:35:29.473 [2024-10-06 11:30:26.785362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.473 [2024-10-06 11:30:26.785427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.473 [2024-10-06 11:30:26.785441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.473 [2024-10-06 11:30:26.785448] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.473 [2024-10-06 11:30:26.785457] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.473 [2024-10-06 11:30:26.785472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.473 qpair failed and we were unable to recover it. 00:35:29.473 [2024-10-06 11:30:26.795378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.473 [2024-10-06 11:30:26.795442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.473 [2024-10-06 11:30:26.795457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.473 [2024-10-06 11:30:26.795464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.473 [2024-10-06 11:30:26.795470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.473 [2024-10-06 11:30:26.795485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.473 qpair failed and we were unable to recover it. 00:35:29.473 [2024-10-06 11:30:26.805424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.473 [2024-10-06 11:30:26.805488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.473 [2024-10-06 11:30:26.805503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.473 [2024-10-06 11:30:26.805509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.473 [2024-10-06 11:30:26.805515] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.473 [2024-10-06 11:30:26.805529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.473 qpair failed and we were unable to recover it. 
00:35:29.473 [2024-10-06 11:30:26.815495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.473 [2024-10-06 11:30:26.815561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.473 [2024-10-06 11:30:26.815575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.473 [2024-10-06 11:30:26.815582] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.473 [2024-10-06 11:30:26.815588] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.473 [2024-10-06 11:30:26.815602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.473 qpair failed and we were unable to recover it. 00:35:29.473 [2024-10-06 11:30:26.825497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.473 [2024-10-06 11:30:26.825567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.473 [2024-10-06 11:30:26.825581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.473 [2024-10-06 11:30:26.825588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.473 [2024-10-06 11:30:26.825594] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.473 [2024-10-06 11:30:26.825609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.473 qpair failed and we were unable to recover it. 00:35:29.473 [2024-10-06 11:30:26.835491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.473 [2024-10-06 11:30:26.835560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.473 [2024-10-06 11:30:26.835574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.473 [2024-10-06 11:30:26.835581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.473 [2024-10-06 11:30:26.835587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.473 [2024-10-06 11:30:26.835601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.473 qpair failed and we were unable to recover it. 
00:35:29.473 [2024-10-06 11:30:26.845573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.473 [2024-10-06 11:30:26.845637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.473 [2024-10-06 11:30:26.845651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.473 [2024-10-06 11:30:26.845658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.473 [2024-10-06 11:30:26.845663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.473 [2024-10-06 11:30:26.845678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.473 qpair failed and we were unable to recover it. 00:35:29.473 [2024-10-06 11:30:26.855578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.473 [2024-10-06 11:30:26.855651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.473 [2024-10-06 11:30:26.855667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.473 [2024-10-06 11:30:26.855674] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.473 [2024-10-06 11:30:26.855680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.473 [2024-10-06 11:30:26.855695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.473 qpair failed and we were unable to recover it. 00:35:29.473 [2024-10-06 11:30:26.865644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.473 [2024-10-06 11:30:26.865714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.473 [2024-10-06 11:30:26.865729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.473 [2024-10-06 11:30:26.865736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.473 [2024-10-06 11:30:26.865742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.473 [2024-10-06 11:30:26.865757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.473 qpair failed and we were unable to recover it. 
00:35:29.473 [2024-10-06 11:30:26.875669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.473 [2024-10-06 11:30:26.875736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.473 [2024-10-06 11:30:26.875752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.473 [2024-10-06 11:30:26.875762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.473 [2024-10-06 11:30:26.875768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.473 [2024-10-06 11:30:26.875784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.473 qpair failed and we were unable to recover it. 00:35:29.473 [2024-10-06 11:30:26.885689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.473 [2024-10-06 11:30:26.885768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.473 [2024-10-06 11:30:26.885782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.473 [2024-10-06 11:30:26.885789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.473 [2024-10-06 11:30:26.885795] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.473 [2024-10-06 11:30:26.885810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.473 qpair failed and we were unable to recover it. 00:35:29.473 [2024-10-06 11:30:26.895665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.473 [2024-10-06 11:30:26.895732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.473 [2024-10-06 11:30:26.895747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.474 [2024-10-06 11:30:26.895753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.474 [2024-10-06 11:30:26.895759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.474 [2024-10-06 11:30:26.895773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.474 qpair failed and we were unable to recover it. 
00:35:29.474 [2024-10-06 11:30:26.905741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.474 [2024-10-06 11:30:26.905813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.474 [2024-10-06 11:30:26.905828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.474 [2024-10-06 11:30:26.905835] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.474 [2024-10-06 11:30:26.905841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.474 [2024-10-06 11:30:26.905856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.474 qpair failed and we were unable to recover it. 00:35:29.474 [2024-10-06 11:30:26.915836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.474 [2024-10-06 11:30:26.915908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.474 [2024-10-06 11:30:26.915924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.474 [2024-10-06 11:30:26.915931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.474 [2024-10-06 11:30:26.915937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.474 [2024-10-06 11:30:26.915953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.474 qpair failed and we were unable to recover it. 00:35:29.474 [2024-10-06 11:30:26.925778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.474 [2024-10-06 11:30:26.925840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.474 [2024-10-06 11:30:26.925855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.474 [2024-10-06 11:30:26.925862] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.474 [2024-10-06 11:30:26.925868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.474 [2024-10-06 11:30:26.925883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.474 qpair failed and we were unable to recover it. 
00:35:29.474 [2024-10-06 11:30:26.935781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.474 [2024-10-06 11:30:26.935865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.474 [2024-10-06 11:30:26.935880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.474 [2024-10-06 11:30:26.935886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.474 [2024-10-06 11:30:26.935892] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.474 [2024-10-06 11:30:26.935907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.474 qpair failed and we were unable to recover it. 00:35:29.474 [2024-10-06 11:30:26.945890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.474 [2024-10-06 11:30:26.945987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.474 [2024-10-06 11:30:26.946002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.474 [2024-10-06 11:30:26.946009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.474 [2024-10-06 11:30:26.946014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.474 [2024-10-06 11:30:26.946030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.474 qpair failed and we were unable to recover it. 00:35:29.474 [2024-10-06 11:30:26.955883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.474 [2024-10-06 11:30:26.955948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.474 [2024-10-06 11:30:26.955962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.474 [2024-10-06 11:30:26.955969] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.474 [2024-10-06 11:30:26.955975] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.474 [2024-10-06 11:30:26.955990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.474 qpair failed and we were unable to recover it. 
00:35:29.474 [2024-10-06 11:30:26.965920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.474 [2024-10-06 11:30:26.965982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.474 [2024-10-06 11:30:26.965997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.474 [2024-10-06 11:30:26.966007] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.474 [2024-10-06 11:30:26.966013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.474 [2024-10-06 11:30:26.966028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.474 qpair failed and we were unable to recover it. 00:35:29.474 [2024-10-06 11:30:26.975915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.474 [2024-10-06 11:30:26.975976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.474 [2024-10-06 11:30:26.975990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.474 [2024-10-06 11:30:26.975996] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.474 [2024-10-06 11:30:26.976002] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.474 [2024-10-06 11:30:26.976017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.474 qpair failed and we were unable to recover it. 00:35:29.474 [2024-10-06 11:30:26.985956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.474 [2024-10-06 11:30:26.986022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.474 [2024-10-06 11:30:26.986037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.474 [2024-10-06 11:30:26.986043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.474 [2024-10-06 11:30:26.986049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.474 [2024-10-06 11:30:26.986067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.474 qpair failed and we were unable to recover it. 
00:35:29.474 [2024-10-06 11:30:26.995996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.474 [2024-10-06 11:30:26.996107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.474 [2024-10-06 11:30:26.996121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.474 [2024-10-06 11:30:26.996128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.474 [2024-10-06 11:30:26.996134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.474 [2024-10-06 11:30:26.996149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.474 qpair failed and we were unable to recover it. 00:35:29.474 [2024-10-06 11:30:27.006007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.474 [2024-10-06 11:30:27.006109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.474 [2024-10-06 11:30:27.006124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.474 [2024-10-06 11:30:27.006130] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.474 [2024-10-06 11:30:27.006136] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.474 [2024-10-06 11:30:27.006151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.474 qpair failed and we were unable to recover it. 00:35:29.474 [2024-10-06 11:30:27.016031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.474 [2024-10-06 11:30:27.016103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.474 [2024-10-06 11:30:27.016117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.474 [2024-10-06 11:30:27.016124] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.474 [2024-10-06 11:30:27.016130] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.474 [2024-10-06 11:30:27.016144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.474 qpair failed and we were unable to recover it. 
00:35:29.474 [2024-10-06 11:30:27.026090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.474 [2024-10-06 11:30:27.026157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.475 [2024-10-06 11:30:27.026173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.475 [2024-10-06 11:30:27.026182] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.475 [2024-10-06 11:30:27.026189] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.475 [2024-10-06 11:30:27.026206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.475 qpair failed and we were unable to recover it. 00:35:29.475 [2024-10-06 11:30:27.036078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.475 [2024-10-06 11:30:27.036144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.475 [2024-10-06 11:30:27.036159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.475 [2024-10-06 11:30:27.036165] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.475 [2024-10-06 11:30:27.036171] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.475 [2024-10-06 11:30:27.036187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.475 qpair failed and we were unable to recover it. 00:35:29.475 [2024-10-06 11:30:27.046202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.475 [2024-10-06 11:30:27.046293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.475 [2024-10-06 11:30:27.046307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.475 [2024-10-06 11:30:27.046313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.736 [2024-10-06 11:30:27.046320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.736 [2024-10-06 11:30:27.046337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.736 qpair failed and we were unable to recover it. 
00:35:29.736 [2024-10-06 11:30:27.056180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.736 [2024-10-06 11:30:27.056283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.736 [2024-10-06 11:30:27.056301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.736 [2024-10-06 11:30:27.056308] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.736 [2024-10-06 11:30:27.056314] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.736 [2024-10-06 11:30:27.056331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.736 qpair failed and we were unable to recover it. 00:35:29.736 [2024-10-06 11:30:27.066168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.736 [2024-10-06 11:30:27.066239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.736 [2024-10-06 11:30:27.066254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.736 [2024-10-06 11:30:27.066260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.736 [2024-10-06 11:30:27.066266] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.736 [2024-10-06 11:30:27.066281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.736 qpair failed and we were unable to recover it. 00:35:29.736 [2024-10-06 11:30:27.076212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.736 [2024-10-06 11:30:27.076279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.736 [2024-10-06 11:30:27.076293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.736 [2024-10-06 11:30:27.076300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.736 [2024-10-06 11:30:27.076305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.736 [2024-10-06 11:30:27.076320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.736 qpair failed and we were unable to recover it. 
00:35:29.736 [2024-10-06 11:30:27.086261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.736 [2024-10-06 11:30:27.086329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.736 [2024-10-06 11:30:27.086343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.736 [2024-10-06 11:30:27.086350] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.736 [2024-10-06 11:30:27.086355] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.736 [2024-10-06 11:30:27.086370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.736 qpair failed and we were unable to recover it. 00:35:29.736 [2024-10-06 11:30:27.096267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.736 [2024-10-06 11:30:27.096330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.736 [2024-10-06 11:30:27.096344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.736 [2024-10-06 11:30:27.096351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.736 [2024-10-06 11:30:27.096357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.736 [2024-10-06 11:30:27.096374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.736 qpair failed and we were unable to recover it. 00:35:29.736 [2024-10-06 11:30:27.106308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.736 [2024-10-06 11:30:27.106393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.736 [2024-10-06 11:30:27.106408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.736 [2024-10-06 11:30:27.106414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.737 [2024-10-06 11:30:27.106420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.737 [2024-10-06 11:30:27.106434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.737 qpair failed and we were unable to recover it. 
00:35:29.737 [2024-10-06 11:30:27.116343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.737 [2024-10-06 11:30:27.116423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.737 [2024-10-06 11:30:27.116438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.737 [2024-10-06 11:30:27.116445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.737 [2024-10-06 11:30:27.116450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.737 [2024-10-06 11:30:27.116465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.737 qpair failed and we were unable to recover it. 00:35:29.737 [2024-10-06 11:30:27.126311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.737 [2024-10-06 11:30:27.126374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.737 [2024-10-06 11:30:27.126389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.737 [2024-10-06 11:30:27.126395] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.737 [2024-10-06 11:30:27.126401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.737 [2024-10-06 11:30:27.126416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.737 qpair failed and we were unable to recover it. 00:35:29.737 [2024-10-06 11:30:27.136413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.737 [2024-10-06 11:30:27.136523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.737 [2024-10-06 11:30:27.136537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.737 [2024-10-06 11:30:27.136543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.737 [2024-10-06 11:30:27.136549] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.737 [2024-10-06 11:30:27.136564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.737 qpair failed and we were unable to recover it. 
00:35:29.737 [2024-10-06 11:30:27.146418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.737 [2024-10-06 11:30:27.146485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.737 [2024-10-06 11:30:27.146503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.737 [2024-10-06 11:30:27.146510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.737 [2024-10-06 11:30:27.146516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.737 [2024-10-06 11:30:27.146531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.737 qpair failed and we were unable to recover it. 00:35:29.737 [2024-10-06 11:30:27.156382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.737 [2024-10-06 11:30:27.156448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.737 [2024-10-06 11:30:27.156461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.737 [2024-10-06 11:30:27.156468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.737 [2024-10-06 11:30:27.156474] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.737 [2024-10-06 11:30:27.156488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.737 qpair failed and we were unable to recover it. 00:35:29.737 [2024-10-06 11:30:27.166464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.737 [2024-10-06 11:30:27.166531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.737 [2024-10-06 11:30:27.166545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.737 [2024-10-06 11:30:27.166552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.737 [2024-10-06 11:30:27.166557] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.737 [2024-10-06 11:30:27.166572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.737 qpair failed and we were unable to recover it. 
00:35:29.737 [2024-10-06 11:30:27.176508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.737 [2024-10-06 11:30:27.176574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.737 [2024-10-06 11:30:27.176588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.737 [2024-10-06 11:30:27.176595] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.737 [2024-10-06 11:30:27.176600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.737 [2024-10-06 11:30:27.176615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.737 qpair failed and we were unable to recover it. 00:35:29.737 [2024-10-06 11:30:27.186554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.737 [2024-10-06 11:30:27.186623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.737 [2024-10-06 11:30:27.186638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.737 [2024-10-06 11:30:27.186644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.737 [2024-10-06 11:30:27.186650] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.737 [2024-10-06 11:30:27.186668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.737 qpair failed and we were unable to recover it. 00:35:29.737 [2024-10-06 11:30:27.196548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.737 [2024-10-06 11:30:27.196610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.737 [2024-10-06 11:30:27.196624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.737 [2024-10-06 11:30:27.196631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.737 [2024-10-06 11:30:27.196637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.737 [2024-10-06 11:30:27.196652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.737 qpair failed and we were unable to recover it. 
00:35:29.737 [2024-10-06 11:30:27.206542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.737 [2024-10-06 11:30:27.206607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.737 [2024-10-06 11:30:27.206621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.737 [2024-10-06 11:30:27.206628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.737 [2024-10-06 11:30:27.206633] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.737 [2024-10-06 11:30:27.206648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.737 qpair failed and we were unable to recover it. 00:35:29.737 [2024-10-06 11:30:27.216588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.737 [2024-10-06 11:30:27.216653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.737 [2024-10-06 11:30:27.216668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.737 [2024-10-06 11:30:27.216675] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.737 [2024-10-06 11:30:27.216680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.737 [2024-10-06 11:30:27.216695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.737 qpair failed and we were unable to recover it. 00:35:29.737 [2024-10-06 11:30:27.226706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.737 [2024-10-06 11:30:27.226772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.737 [2024-10-06 11:30:27.226786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.737 [2024-10-06 11:30:27.226793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.737 [2024-10-06 11:30:27.226799] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.737 [2024-10-06 11:30:27.226814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.737 qpair failed and we were unable to recover it. 
00:35:29.737 [2024-10-06 11:30:27.236673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.737 [2024-10-06 11:30:27.236738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.737 [2024-10-06 11:30:27.236755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.738 [2024-10-06 11:30:27.236762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.738 [2024-10-06 11:30:27.236768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.738 [2024-10-06 11:30:27.236783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.738 qpair failed and we were unable to recover it. 00:35:29.738 [2024-10-06 11:30:27.246703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.738 [2024-10-06 11:30:27.246763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.738 [2024-10-06 11:30:27.246778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.738 [2024-10-06 11:30:27.246784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.738 [2024-10-06 11:30:27.246791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.738 [2024-10-06 11:30:27.246805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.738 qpair failed and we were unable to recover it. 00:35:29.738 [2024-10-06 11:30:27.256802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.738 [2024-10-06 11:30:27.256898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.738 [2024-10-06 11:30:27.256912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.738 [2024-10-06 11:30:27.256919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.738 [2024-10-06 11:30:27.256925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.738 [2024-10-06 11:30:27.256939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.738 qpair failed and we were unable to recover it. 
00:35:29.738 [2024-10-06 11:30:27.266856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.738 [2024-10-06 11:30:27.266922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.738 [2024-10-06 11:30:27.266937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.738 [2024-10-06 11:30:27.266944] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.738 [2024-10-06 11:30:27.266950] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.738 [2024-10-06 11:30:27.266965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.738 qpair failed and we were unable to recover it. 00:35:29.738 [2024-10-06 11:30:27.276728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.738 [2024-10-06 11:30:27.276797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.738 [2024-10-06 11:30:27.276812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.738 [2024-10-06 11:30:27.276819] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.738 [2024-10-06 11:30:27.276827] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.738 [2024-10-06 11:30:27.276842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.738 qpair failed and we were unable to recover it. 00:35:29.738 [2024-10-06 11:30:27.286764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.738 [2024-10-06 11:30:27.286876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.738 [2024-10-06 11:30:27.286898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.738 [2024-10-06 11:30:27.286905] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.738 [2024-10-06 11:30:27.286910] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.738 [2024-10-06 11:30:27.286926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.738 qpair failed and we were unable to recover it. 
00:35:29.738 [2024-10-06 11:30:27.296893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.738 [2024-10-06 11:30:27.296969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.738 [2024-10-06 11:30:27.296983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.738 [2024-10-06 11:30:27.296989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.738 [2024-10-06 11:30:27.296995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.738 [2024-10-06 11:30:27.297009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.738 qpair failed and we were unable to recover it. 00:35:29.738 [2024-10-06 11:30:27.306879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.738 [2024-10-06 11:30:27.306947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.738 [2024-10-06 11:30:27.306961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.738 [2024-10-06 11:30:27.306968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.738 [2024-10-06 11:30:27.306973] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.738 [2024-10-06 11:30:27.306988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.738 qpair failed and we were unable to recover it. 00:35:29.999 [2024-10-06 11:30:27.316900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.999 [2024-10-06 11:30:27.316964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.999 [2024-10-06 11:30:27.316979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.999 [2024-10-06 11:30:27.316985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.999 [2024-10-06 11:30:27.316992] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.999 [2024-10-06 11:30:27.317007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.999 qpair failed and we were unable to recover it. 
00:35:29.999 [2024-10-06 11:30:27.326950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.999 [2024-10-06 11:30:27.327028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.999 [2024-10-06 11:30:27.327042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.999 [2024-10-06 11:30:27.327049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.999 [2024-10-06 11:30:27.327055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.999 [2024-10-06 11:30:27.327074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.999 qpair failed and we were unable to recover it. 00:35:29.999 [2024-10-06 11:30:27.336959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.999 [2024-10-06 11:30:27.337020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.999 [2024-10-06 11:30:27.337034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.999 [2024-10-06 11:30:27.337041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.999 [2024-10-06 11:30:27.337047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.999 [2024-10-06 11:30:27.337065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.999 qpair failed and we were unable to recover it. 00:35:29.999 [2024-10-06 11:30:27.346990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.999 [2024-10-06 11:30:27.347056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.999 [2024-10-06 11:30:27.347075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.999 [2024-10-06 11:30:27.347081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.999 [2024-10-06 11:30:27.347087] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.999 [2024-10-06 11:30:27.347102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.999 qpair failed and we were unable to recover it. 
00:35:29.999 [2024-10-06 11:30:27.356961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.999 [2024-10-06 11:30:27.357066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.999 [2024-10-06 11:30:27.357080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.999 [2024-10-06 11:30:27.357086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.999 [2024-10-06 11:30:27.357092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.999 [2024-10-06 11:30:27.357107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.999 qpair failed and we were unable to recover it. 00:35:29.999 [2024-10-06 11:30:27.367071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.999 [2024-10-06 11:30:27.367138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.999 [2024-10-06 11:30:27.367153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.999 [2024-10-06 11:30:27.367162] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.999 [2024-10-06 11:30:27.367168] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.999 [2024-10-06 11:30:27.367183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.999 qpair failed and we were unable to recover it. 00:35:29.999 [2024-10-06 11:30:27.377088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:29.999 [2024-10-06 11:30:27.377154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:29.999 [2024-10-06 11:30:27.377169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:29.999 [2024-10-06 11:30:27.377176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:29.999 [2024-10-06 11:30:27.377182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:29.999 [2024-10-06 11:30:27.377196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.999 qpair failed and we were unable to recover it. 
00:35:29.999 [2024-10-06 11:30:27.387122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.000 [2024-10-06 11:30:27.387188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.000 [2024-10-06 11:30:27.387203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.000 [2024-10-06 11:30:27.387210] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.000 [2024-10-06 11:30:27.387215] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.000 [2024-10-06 11:30:27.387230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.000 qpair failed and we were unable to recover it. 00:35:30.000 [2024-10-06 11:30:27.397167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.000 [2024-10-06 11:30:27.397250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.000 [2024-10-06 11:30:27.397264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.000 [2024-10-06 11:30:27.397271] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.000 [2024-10-06 11:30:27.397277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.000 [2024-10-06 11:30:27.397292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.000 qpair failed and we were unable to recover it. 00:35:30.000 [2024-10-06 11:30:27.407123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.000 [2024-10-06 11:30:27.407188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.000 [2024-10-06 11:30:27.407202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.000 [2024-10-06 11:30:27.407209] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.000 [2024-10-06 11:30:27.407215] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.000 [2024-10-06 11:30:27.407230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.000 qpair failed and we were unable to recover it. 
00:35:30.000 [2024-10-06 11:30:27.417208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.000 [2024-10-06 11:30:27.417280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.000 [2024-10-06 11:30:27.417295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.000 [2024-10-06 11:30:27.417301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.000 [2024-10-06 11:30:27.417307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.000 [2024-10-06 11:30:27.417321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.000 qpair failed and we were unable to recover it. 00:35:30.000 [2024-10-06 11:30:27.427199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.000 [2024-10-06 11:30:27.427266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.000 [2024-10-06 11:30:27.427281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.000 [2024-10-06 11:30:27.427287] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.000 [2024-10-06 11:30:27.427293] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.000 [2024-10-06 11:30:27.427308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.000 qpair failed and we were unable to recover it. 00:35:30.000 [2024-10-06 11:30:27.437254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.000 [2024-10-06 11:30:27.437323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.000 [2024-10-06 11:30:27.437338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.000 [2024-10-06 11:30:27.437344] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.000 [2024-10-06 11:30:27.437349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.000 [2024-10-06 11:30:27.437364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.000 qpair failed and we were unable to recover it. 
00:35:30.000 [2024-10-06 11:30:27.447240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.000 [2024-10-06 11:30:27.447307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.000 [2024-10-06 11:30:27.447321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.000 [2024-10-06 11:30:27.447328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.000 [2024-10-06 11:30:27.447333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.000 [2024-10-06 11:30:27.447348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.000 qpair failed and we were unable to recover it. 00:35:30.000 [2024-10-06 11:30:27.457329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.000 [2024-10-06 11:30:27.457405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.000 [2024-10-06 11:30:27.457420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.000 [2024-10-06 11:30:27.457429] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.000 [2024-10-06 11:30:27.457435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.000 [2024-10-06 11:30:27.457449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.000 qpair failed and we were unable to recover it. 00:35:30.000 [2024-10-06 11:30:27.467329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.000 [2024-10-06 11:30:27.467399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.000 [2024-10-06 11:30:27.467413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.000 [2024-10-06 11:30:27.467419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.000 [2024-10-06 11:30:27.467425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.000 [2024-10-06 11:30:27.467440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.000 qpair failed and we were unable to recover it. 
00:35:30.000 [2024-10-06 11:30:27.477375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.000 [2024-10-06 11:30:27.477444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.000 [2024-10-06 11:30:27.477458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.000 [2024-10-06 11:30:27.477464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.000 [2024-10-06 11:30:27.477470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.000 [2024-10-06 11:30:27.477485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.000 qpair failed and we were unable to recover it. 00:35:30.000 [2024-10-06 11:30:27.487348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.000 [2024-10-06 11:30:27.487419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.000 [2024-10-06 11:30:27.487433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.000 [2024-10-06 11:30:27.487439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.000 [2024-10-06 11:30:27.487445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.000 [2024-10-06 11:30:27.487460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.000 qpair failed and we were unable to recover it. 00:35:30.000 [2024-10-06 11:30:27.497379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.000 [2024-10-06 11:30:27.497445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.000 [2024-10-06 11:30:27.497459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.000 [2024-10-06 11:30:27.497465] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.000 [2024-10-06 11:30:27.497471] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.000 [2024-10-06 11:30:27.497485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.000 qpair failed and we were unable to recover it. 
00:35:30.000 [2024-10-06 11:30:27.507469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.000 [2024-10-06 11:30:27.507536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.000 [2024-10-06 11:30:27.507550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.000 [2024-10-06 11:30:27.507556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.000 [2024-10-06 11:30:27.507562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.001 [2024-10-06 11:30:27.507576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.001 qpair failed and we were unable to recover it. 00:35:30.001 [2024-10-06 11:30:27.517482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.001 [2024-10-06 11:30:27.517590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.001 [2024-10-06 11:30:27.517606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.001 [2024-10-06 11:30:27.517613] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.001 [2024-10-06 11:30:27.517619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.001 [2024-10-06 11:30:27.517634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.001 qpair failed and we were unable to recover it. 00:35:30.001 [2024-10-06 11:30:27.527538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.001 [2024-10-06 11:30:27.527606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.001 [2024-10-06 11:30:27.527622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.001 [2024-10-06 11:30:27.527628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.001 [2024-10-06 11:30:27.527635] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.001 [2024-10-06 11:30:27.527649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.001 qpair failed and we were unable to recover it. 
00:35:30.001 [2024-10-06 11:30:27.537537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.001 [2024-10-06 11:30:27.537603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.001 [2024-10-06 11:30:27.537618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.001 [2024-10-06 11:30:27.537624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.001 [2024-10-06 11:30:27.537631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.001 [2024-10-06 11:30:27.537647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.001 qpair failed and we were unable to recover it. 00:35:30.001 [2024-10-06 11:30:27.547525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.001 [2024-10-06 11:30:27.547603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.001 [2024-10-06 11:30:27.547623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.001 [2024-10-06 11:30:27.547630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.001 [2024-10-06 11:30:27.547636] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.001 [2024-10-06 11:30:27.547651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.001 qpair failed and we were unable to recover it. 00:35:30.001 [2024-10-06 11:30:27.557571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.001 [2024-10-06 11:30:27.557637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.001 [2024-10-06 11:30:27.557651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.001 [2024-10-06 11:30:27.557658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.001 [2024-10-06 11:30:27.557664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.001 [2024-10-06 11:30:27.557679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.001 qpair failed and we were unable to recover it. 
00:35:30.001 [2024-10-06 11:30:27.567653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.001 [2024-10-06 11:30:27.567717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.001 [2024-10-06 11:30:27.567731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.001 [2024-10-06 11:30:27.567738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.001 [2024-10-06 11:30:27.567744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.001 [2024-10-06 11:30:27.567758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.001 qpair failed and we were unable to recover it. 00:35:30.262 [2024-10-06 11:30:27.577643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.262 [2024-10-06 11:30:27.577707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.262 [2024-10-06 11:30:27.577722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.262 [2024-10-06 11:30:27.577729] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.262 [2024-10-06 11:30:27.577735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.262 [2024-10-06 11:30:27.577750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.262 qpair failed and we were unable to recover it. 00:35:30.262 [2024-10-06 11:30:27.587761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.262 [2024-10-06 11:30:27.587859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.262 [2024-10-06 11:30:27.587873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.262 [2024-10-06 11:30:27.587880] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.262 [2024-10-06 11:30:27.587886] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.262 [2024-10-06 11:30:27.587905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.262 qpair failed and we were unable to recover it. 
00:35:30.262 [2024-10-06 11:30:27.597756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.262 [2024-10-06 11:30:27.597817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.262 [2024-10-06 11:30:27.597831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.262 [2024-10-06 11:30:27.597838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.262 [2024-10-06 11:30:27.597844] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.262 [2024-10-06 11:30:27.597858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.262 qpair failed and we were unable to recover it. 00:35:30.262 [2024-10-06 11:30:27.607749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.262 [2024-10-06 11:30:27.607819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.262 [2024-10-06 11:30:27.607834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.262 [2024-10-06 11:30:27.607841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.262 [2024-10-06 11:30:27.607847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.262 [2024-10-06 11:30:27.607862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.262 qpair failed and we were unable to recover it. 00:35:30.262 [2024-10-06 11:30:27.617734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.262 [2024-10-06 11:30:27.617796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.262 [2024-10-06 11:30:27.617810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.262 [2024-10-06 11:30:27.617817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.262 [2024-10-06 11:30:27.617822] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.262 [2024-10-06 11:30:27.617838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.262 qpair failed and we were unable to recover it. 
00:35:30.262 [2024-10-06 11:30:27.627811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.262 [2024-10-06 11:30:27.627879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.262 [2024-10-06 11:30:27.627893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.262 [2024-10-06 11:30:27.627899] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.262 [2024-10-06 11:30:27.627905] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.262 [2024-10-06 11:30:27.627920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.262 qpair failed and we were unable to recover it. 00:35:30.262 [2024-10-06 11:30:27.637838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.262 [2024-10-06 11:30:27.637952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.262 [2024-10-06 11:30:27.637976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.262 [2024-10-06 11:30:27.637982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.262 [2024-10-06 11:30:27.637989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.262 [2024-10-06 11:30:27.638004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.262 qpair failed and we were unable to recover it. 00:35:30.262 [2024-10-06 11:30:27.647912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.262 [2024-10-06 11:30:27.647971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.262 [2024-10-06 11:30:27.647986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.262 [2024-10-06 11:30:27.647993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.262 [2024-10-06 11:30:27.647998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.262 [2024-10-06 11:30:27.648014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.262 qpair failed and we were unable to recover it. 
00:35:30.262 [2024-10-06 11:30:27.657896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.262 [2024-10-06 11:30:27.657970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.262 [2024-10-06 11:30:27.657985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.262 [2024-10-06 11:30:27.657991] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.262 [2024-10-06 11:30:27.657997] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.262 [2024-10-06 11:30:27.658012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.263 qpair failed and we were unable to recover it. 00:35:30.263 [2024-10-06 11:30:27.667899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.263 [2024-10-06 11:30:27.667967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.263 [2024-10-06 11:30:27.667981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.263 [2024-10-06 11:30:27.667988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.263 [2024-10-06 11:30:27.667994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.263 [2024-10-06 11:30:27.668008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.263 qpair failed and we were unable to recover it. 00:35:30.263 [2024-10-06 11:30:27.677918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.263 [2024-10-06 11:30:27.677986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.263 [2024-10-06 11:30:27.678000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.263 [2024-10-06 11:30:27.678007] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.263 [2024-10-06 11:30:27.678013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.263 [2024-10-06 11:30:27.678031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.263 qpair failed and we were unable to recover it. 
00:35:30.263 [2024-10-06 11:30:27.687967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.263 [2024-10-06 11:30:27.688031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.263 [2024-10-06 11:30:27.688046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.263 [2024-10-06 11:30:27.688052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.263 [2024-10-06 11:30:27.688061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.263 [2024-10-06 11:30:27.688077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.263 qpair failed and we were unable to recover it. 00:35:30.263 [2024-10-06 11:30:27.698018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.263 [2024-10-06 11:30:27.698083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.263 [2024-10-06 11:30:27.698097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.263 [2024-10-06 11:30:27.698104] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.263 [2024-10-06 11:30:27.698110] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.263 [2024-10-06 11:30:27.698124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.263 qpair failed and we were unable to recover it. 00:35:30.263 [2024-10-06 11:30:27.708042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.263 [2024-10-06 11:30:27.708130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.263 [2024-10-06 11:30:27.708144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.263 [2024-10-06 11:30:27.708151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.263 [2024-10-06 11:30:27.708157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.263 [2024-10-06 11:30:27.708171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.263 qpair failed and we were unable to recover it. 
00:35:30.263 [2024-10-06 11:30:27.718048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.263 [2024-10-06 11:30:27.718119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.263 [2024-10-06 11:30:27.718133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.263 [2024-10-06 11:30:27.718140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.263 [2024-10-06 11:30:27.718146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.263 [2024-10-06 11:30:27.718160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.263 qpair failed and we were unable to recover it. 00:35:30.263 [2024-10-06 11:30:27.728077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.263 [2024-10-06 11:30:27.728149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.263 [2024-10-06 11:30:27.728166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.263 [2024-10-06 11:30:27.728173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.263 [2024-10-06 11:30:27.728179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.263 [2024-10-06 11:30:27.728194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.263 qpair failed and we were unable to recover it. 00:35:30.263 [2024-10-06 11:30:27.738109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.263 [2024-10-06 11:30:27.738171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.263 [2024-10-06 11:30:27.738186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.263 [2024-10-06 11:30:27.738192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.263 [2024-10-06 11:30:27.738198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.263 [2024-10-06 11:30:27.738212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.263 qpair failed and we were unable to recover it. 
00:35:30.263 [2024-10-06 11:30:27.748135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.263 [2024-10-06 11:30:27.748201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.263 [2024-10-06 11:30:27.748216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.263 [2024-10-06 11:30:27.748222] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.263 [2024-10-06 11:30:27.748228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.263 [2024-10-06 11:30:27.748243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.263 qpair failed and we were unable to recover it. 00:35:30.263 [2024-10-06 11:30:27.758161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.263 [2024-10-06 11:30:27.758248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.263 [2024-10-06 11:30:27.758262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.263 [2024-10-06 11:30:27.758269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.263 [2024-10-06 11:30:27.758275] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.263 [2024-10-06 11:30:27.758289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.263 qpair failed and we were unable to recover it. 00:35:30.263 [2024-10-06 11:30:27.768219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.263 [2024-10-06 11:30:27.768328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.263 [2024-10-06 11:30:27.768350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.263 [2024-10-06 11:30:27.768357] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.263 [2024-10-06 11:30:27.768366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.263 [2024-10-06 11:30:27.768381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.263 qpair failed and we were unable to recover it. 
00:35:30.263 [2024-10-06 11:30:27.778251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.263 [2024-10-06 11:30:27.778316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.263 [2024-10-06 11:30:27.778330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.263 [2024-10-06 11:30:27.778336] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.263 [2024-10-06 11:30:27.778342] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.263 [2024-10-06 11:30:27.778356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.263 qpair failed and we were unable to recover it. 00:35:30.263 [2024-10-06 11:30:27.788280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.263 [2024-10-06 11:30:27.788346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.263 [2024-10-06 11:30:27.788360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.263 [2024-10-06 11:30:27.788366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.264 [2024-10-06 11:30:27.788372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.264 [2024-10-06 11:30:27.788386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.264 qpair failed and we were unable to recover it. 00:35:30.264 [2024-10-06 11:30:27.798298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.264 [2024-10-06 11:30:27.798373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.264 [2024-10-06 11:30:27.798387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.264 [2024-10-06 11:30:27.798394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.264 [2024-10-06 11:30:27.798399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.264 [2024-10-06 11:30:27.798414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.264 qpair failed and we were unable to recover it. 
00:35:30.264 [2024-10-06 11:30:27.808310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.264 [2024-10-06 11:30:27.808371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.264 [2024-10-06 11:30:27.808385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.264 [2024-10-06 11:30:27.808392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.264 [2024-10-06 11:30:27.808398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.264 [2024-10-06 11:30:27.808412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.264 qpair failed and we were unable to recover it. 00:35:30.264 [2024-10-06 11:30:27.818408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.264 [2024-10-06 11:30:27.818498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.264 [2024-10-06 11:30:27.818513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.264 [2024-10-06 11:30:27.818519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.264 [2024-10-06 11:30:27.818525] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.264 [2024-10-06 11:30:27.818540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.264 qpair failed and we were unable to recover it. 00:35:30.264 [2024-10-06 11:30:27.828383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.264 [2024-10-06 11:30:27.828448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.264 [2024-10-06 11:30:27.828462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.264 [2024-10-06 11:30:27.828469] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.264 [2024-10-06 11:30:27.828475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.264 [2024-10-06 11:30:27.828490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.264 qpair failed and we were unable to recover it. 
00:35:30.525 [2024-10-06 11:30:27.838442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.525 [2024-10-06 11:30:27.838510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.525 [2024-10-06 11:30:27.838524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.525 [2024-10-06 11:30:27.838531] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.525 [2024-10-06 11:30:27.838537] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.525 [2024-10-06 11:30:27.838552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.525 qpair failed and we were unable to recover it. 00:35:30.525 [2024-10-06 11:30:27.848445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.525 [2024-10-06 11:30:27.848520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.525 [2024-10-06 11:30:27.848535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.525 [2024-10-06 11:30:27.848541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.525 [2024-10-06 11:30:27.848547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.525 [2024-10-06 11:30:27.848562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.525 qpair failed and we were unable to recover it. 00:35:30.525 [2024-10-06 11:30:27.858449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.525 [2024-10-06 11:30:27.858514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.525 [2024-10-06 11:30:27.858528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.525 [2024-10-06 11:30:27.858535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.525 [2024-10-06 11:30:27.858544] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.525 [2024-10-06 11:30:27.858558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.525 qpair failed and we were unable to recover it. 
00:35:30.525 [2024-10-06 11:30:27.868489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.525 [2024-10-06 11:30:27.868555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.525 [2024-10-06 11:30:27.868569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.525 [2024-10-06 11:30:27.868576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.525 [2024-10-06 11:30:27.868582] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.525 [2024-10-06 11:30:27.868596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.525 qpair failed and we were unable to recover it. 00:35:30.525 [2024-10-06 11:30:27.878523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.525 [2024-10-06 11:30:27.878587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.525 [2024-10-06 11:30:27.878601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.525 [2024-10-06 11:30:27.878608] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.525 [2024-10-06 11:30:27.878614] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.525 [2024-10-06 11:30:27.878629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.525 qpair failed and we were unable to recover it. 00:35:30.525 [2024-10-06 11:30:27.888600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.525 [2024-10-06 11:30:27.888659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.525 [2024-10-06 11:30:27.888673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.525 [2024-10-06 11:30:27.888679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.525 [2024-10-06 11:30:27.888686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.525 [2024-10-06 11:30:27.888701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.525 qpair failed and we were unable to recover it. 
00:35:30.525 [2024-10-06 11:30:27.898564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.525 [2024-10-06 11:30:27.898634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.525 [2024-10-06 11:30:27.898648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.525 [2024-10-06 11:30:27.898655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.525 [2024-10-06 11:30:27.898661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.525 [2024-10-06 11:30:27.898675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.525 qpair failed and we were unable to recover it. 00:35:30.525 [2024-10-06 11:30:27.908614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.525 [2024-10-06 11:30:27.908680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.525 [2024-10-06 11:30:27.908694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.525 [2024-10-06 11:30:27.908701] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.525 [2024-10-06 11:30:27.908707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.525 [2024-10-06 11:30:27.908721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.525 qpair failed and we were unable to recover it. 00:35:30.525 [2024-10-06 11:30:27.918635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.525 [2024-10-06 11:30:27.918696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.525 [2024-10-06 11:30:27.918710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.525 [2024-10-06 11:30:27.918716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.525 [2024-10-06 11:30:27.918722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.525 [2024-10-06 11:30:27.918737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.525 qpair failed and we were unable to recover it. 
00:35:30.525 [2024-10-06 11:30:27.928659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.525 [2024-10-06 11:30:27.928722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.525 [2024-10-06 11:30:27.928736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.526 [2024-10-06 11:30:27.928743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.526 [2024-10-06 11:30:27.928749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.526 [2024-10-06 11:30:27.928763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.526 qpair failed and we were unable to recover it. 00:35:30.526 [2024-10-06 11:30:27.938707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.526 [2024-10-06 11:30:27.938817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.526 [2024-10-06 11:30:27.938839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.526 [2024-10-06 11:30:27.938845] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.526 [2024-10-06 11:30:27.938851] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.526 [2024-10-06 11:30:27.938866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.526 qpair failed and we were unable to recover it. 00:35:30.526 [2024-10-06 11:30:27.948727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.526 [2024-10-06 11:30:27.948839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.526 [2024-10-06 11:30:27.948862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.526 [2024-10-06 11:30:27.948872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.526 [2024-10-06 11:30:27.948878] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.526 [2024-10-06 11:30:27.948893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.526 qpair failed and we were unable to recover it. 
00:35:30.526 [2024-10-06 11:30:27.958764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.526 [2024-10-06 11:30:27.958868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.526 [2024-10-06 11:30:27.958882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.526 [2024-10-06 11:30:27.958889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.526 [2024-10-06 11:30:27.958894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.526 [2024-10-06 11:30:27.958910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.526 qpair failed and we were unable to recover it. 00:35:30.526 [2024-10-06 11:30:27.968745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.526 [2024-10-06 11:30:27.968845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.526 [2024-10-06 11:30:27.968859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.526 [2024-10-06 11:30:27.968866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.526 [2024-10-06 11:30:27.968871] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.526 [2024-10-06 11:30:27.968887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.526 qpair failed and we were unable to recover it. 00:35:30.526 [2024-10-06 11:30:27.978798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.526 [2024-10-06 11:30:27.978862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.526 [2024-10-06 11:30:27.978876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.526 [2024-10-06 11:30:27.978883] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.526 [2024-10-06 11:30:27.978889] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.526 [2024-10-06 11:30:27.978903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.526 qpair failed and we were unable to recover it. 
00:35:30.526 [2024-10-06 11:30:27.988862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.526 [2024-10-06 11:30:27.988931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.526 [2024-10-06 11:30:27.988945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.526 [2024-10-06 11:30:27.988952] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.526 [2024-10-06 11:30:27.988957] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.526 [2024-10-06 11:30:27.988971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.526 qpair failed and we were unable to recover it. 00:35:30.526 [2024-10-06 11:30:27.998858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.526 [2024-10-06 11:30:27.998924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.526 [2024-10-06 11:30:27.998938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.526 [2024-10-06 11:30:27.998944] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.526 [2024-10-06 11:30:27.998950] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.526 [2024-10-06 11:30:27.998964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.526 qpair failed and we were unable to recover it. 00:35:30.526 [2024-10-06 11:30:28.008891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.526 [2024-10-06 11:30:28.008958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.526 [2024-10-06 11:30:28.008972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.526 [2024-10-06 11:30:28.008979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.526 [2024-10-06 11:30:28.008985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.526 [2024-10-06 11:30:28.008999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.526 qpair failed and we were unable to recover it. 
00:35:30.526 [2024-10-06 11:30:28.018932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.526 [2024-10-06 11:30:28.018991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.526 [2024-10-06 11:30:28.019006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.526 [2024-10-06 11:30:28.019013] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.526 [2024-10-06 11:30:28.019018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.526 [2024-10-06 11:30:28.019034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.526 qpair failed and we were unable to recover it. 00:35:30.526 [2024-10-06 11:30:28.029003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.526 [2024-10-06 11:30:28.029111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.526 [2024-10-06 11:30:28.029125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.526 [2024-10-06 11:30:28.029132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.526 [2024-10-06 11:30:28.029138] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.526 [2024-10-06 11:30:28.029152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.526 qpair failed and we were unable to recover it. 00:35:30.526 [2024-10-06 11:30:28.038993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.526 [2024-10-06 11:30:28.039098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.526 [2024-10-06 11:30:28.039113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.526 [2024-10-06 11:30:28.039122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.526 [2024-10-06 11:30:28.039128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.526 [2024-10-06 11:30:28.039143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.526 qpair failed and we were unable to recover it. 
00:35:30.526 [2024-10-06 11:30:28.049020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.526 [2024-10-06 11:30:28.049083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.526 [2024-10-06 11:30:28.049098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.526 [2024-10-06 11:30:28.049105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.526 [2024-10-06 11:30:28.049111] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.526 [2024-10-06 11:30:28.049126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.526 qpair failed and we were unable to recover it. 00:35:30.526 [2024-10-06 11:30:28.059043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.527 [2024-10-06 11:30:28.059112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.527 [2024-10-06 11:30:28.059127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.527 [2024-10-06 11:30:28.059134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.527 [2024-10-06 11:30:28.059140] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.527 [2024-10-06 11:30:28.059155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.527 qpair failed and we were unable to recover it. 00:35:30.527 [2024-10-06 11:30:28.069073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.527 [2024-10-06 11:30:28.069151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.527 [2024-10-06 11:30:28.069166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.527 [2024-10-06 11:30:28.069172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.527 [2024-10-06 11:30:28.069178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.527 [2024-10-06 11:30:28.069193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.527 qpair failed and we were unable to recover it. 
00:35:30.527 [2024-10-06 11:30:28.079143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.527 [2024-10-06 11:30:28.079210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.527 [2024-10-06 11:30:28.079225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.527 [2024-10-06 11:30:28.079232] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.527 [2024-10-06 11:30:28.079237] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.527 [2024-10-06 11:30:28.079252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.527 qpair failed and we were unable to recover it. 00:35:30.527 [2024-10-06 11:30:28.089152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.527 [2024-10-06 11:30:28.089235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.527 [2024-10-06 11:30:28.089250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.527 [2024-10-06 11:30:28.089256] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.527 [2024-10-06 11:30:28.089262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.527 [2024-10-06 11:30:28.089277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.527 qpair failed and we were unable to recover it. 00:35:30.788 [2024-10-06 11:30:28.099202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.788 [2024-10-06 11:30:28.099314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.788 [2024-10-06 11:30:28.099330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.788 [2024-10-06 11:30:28.099336] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.788 [2024-10-06 11:30:28.099343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.788 [2024-10-06 11:30:28.099358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.788 qpair failed and we were unable to recover it. 
00:35:30.788 [2024-10-06 11:30:28.109125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.788 [2024-10-06 11:30:28.109192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.788 [2024-10-06 11:30:28.109207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.788 [2024-10-06 11:30:28.109214] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.788 [2024-10-06 11:30:28.109219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.788 [2024-10-06 11:30:28.109234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.788 qpair failed and we were unable to recover it. 00:35:30.788 [2024-10-06 11:30:28.119234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.788 [2024-10-06 11:30:28.119343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.788 [2024-10-06 11:30:28.119357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.788 [2024-10-06 11:30:28.119364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.788 [2024-10-06 11:30:28.119370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.788 [2024-10-06 11:30:28.119385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.788 qpair failed and we were unable to recover it. 00:35:30.788 [2024-10-06 11:30:28.129232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.788 [2024-10-06 11:30:28.129295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.788 [2024-10-06 11:30:28.129313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.788 [2024-10-06 11:30:28.129320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.788 [2024-10-06 11:30:28.129325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.788 [2024-10-06 11:30:28.129340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.788 qpair failed and we were unable to recover it. 
00:35:30.788 [2024-10-06 11:30:28.139221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.788 [2024-10-06 11:30:28.139287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.789 [2024-10-06 11:30:28.139302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.789 [2024-10-06 11:30:28.139308] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.789 [2024-10-06 11:30:28.139314] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.789 [2024-10-06 11:30:28.139329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.789 qpair failed and we were unable to recover it. 00:35:30.789 [2024-10-06 11:30:28.149335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.789 [2024-10-06 11:30:28.149405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.789 [2024-10-06 11:30:28.149420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.789 [2024-10-06 11:30:28.149427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.789 [2024-10-06 11:30:28.149433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.789 [2024-10-06 11:30:28.149447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.789 qpair failed and we were unable to recover it. 00:35:30.789 [2024-10-06 11:30:28.159323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.789 [2024-10-06 11:30:28.159390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.789 [2024-10-06 11:30:28.159404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.789 [2024-10-06 11:30:28.159410] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.789 [2024-10-06 11:30:28.159416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.789 [2024-10-06 11:30:28.159431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.789 qpair failed and we were unable to recover it. 
00:35:30.789 [2024-10-06 11:30:28.169381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.789 [2024-10-06 11:30:28.169491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.789 [2024-10-06 11:30:28.169514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.789 [2024-10-06 11:30:28.169520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.789 [2024-10-06 11:30:28.169526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.789 [2024-10-06 11:30:28.169544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.789 qpair failed and we were unable to recover it. 00:35:30.789 [2024-10-06 11:30:28.179395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.789 [2024-10-06 11:30:28.179460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.789 [2024-10-06 11:30:28.179475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.789 [2024-10-06 11:30:28.179481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.789 [2024-10-06 11:30:28.179487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.789 [2024-10-06 11:30:28.179501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.789 qpair failed and we were unable to recover it. 00:35:30.789 [2024-10-06 11:30:28.189416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.789 [2024-10-06 11:30:28.189518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.789 [2024-10-06 11:30:28.189533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.789 [2024-10-06 11:30:28.189539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.789 [2024-10-06 11:30:28.189545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.789 [2024-10-06 11:30:28.189559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.789 qpair failed and we were unable to recover it. 
00:35:30.789 [2024-10-06 11:30:28.199447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.789 [2024-10-06 11:30:28.199512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.789 [2024-10-06 11:30:28.199526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.789 [2024-10-06 11:30:28.199532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.789 [2024-10-06 11:30:28.199538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.789 [2024-10-06 11:30:28.199553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.789 qpair failed and we were unable to recover it. 00:35:30.789 [2024-10-06 11:30:28.209476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.789 [2024-10-06 11:30:28.209540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.789 [2024-10-06 11:30:28.209554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.789 [2024-10-06 11:30:28.209560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.789 [2024-10-06 11:30:28.209566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.789 [2024-10-06 11:30:28.209581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.789 qpair failed and we were unable to recover it. 00:35:30.789 [2024-10-06 11:30:28.219516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.789 [2024-10-06 11:30:28.219578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.789 [2024-10-06 11:30:28.219595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.789 [2024-10-06 11:30:28.219601] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.789 [2024-10-06 11:30:28.219607] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.789 [2024-10-06 11:30:28.219622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.789 qpair failed and we were unable to recover it. 
00:35:30.789 [2024-10-06 11:30:28.229532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.789 [2024-10-06 11:30:28.229599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.789 [2024-10-06 11:30:28.229613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.789 [2024-10-06 11:30:28.229620] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.789 [2024-10-06 11:30:28.229625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.789 [2024-10-06 11:30:28.229640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.789 qpair failed and we were unable to recover it. 00:35:30.789 [2024-10-06 11:30:28.239559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.789 [2024-10-06 11:30:28.239624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.789 [2024-10-06 11:30:28.239638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.789 [2024-10-06 11:30:28.239645] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.789 [2024-10-06 11:30:28.239650] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.789 [2024-10-06 11:30:28.239665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.789 qpair failed and we were unable to recover it. 00:35:30.789 [2024-10-06 11:30:28.249608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.789 [2024-10-06 11:30:28.249680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.789 [2024-10-06 11:30:28.249695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.789 [2024-10-06 11:30:28.249702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.789 [2024-10-06 11:30:28.249707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.789 [2024-10-06 11:30:28.249722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.789 qpair failed and we were unable to recover it. 
00:35:30.789 [2024-10-06 11:30:28.259631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.789 [2024-10-06 11:30:28.259691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.789 [2024-10-06 11:30:28.259706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.789 [2024-10-06 11:30:28.259712] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.789 [2024-10-06 11:30:28.259724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.789 [2024-10-06 11:30:28.259739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.789 qpair failed and we were unable to recover it. 00:35:30.789 [2024-10-06 11:30:28.269672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.789 [2024-10-06 11:30:28.269744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.789 [2024-10-06 11:30:28.269758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.790 [2024-10-06 11:30:28.269764] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.790 [2024-10-06 11:30:28.269770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.790 [2024-10-06 11:30:28.269784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.790 qpair failed and we were unable to recover it. 00:35:30.790 [2024-10-06 11:30:28.279698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.790 [2024-10-06 11:30:28.279800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.790 [2024-10-06 11:30:28.279815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.790 [2024-10-06 11:30:28.279821] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.790 [2024-10-06 11:30:28.279827] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.790 [2024-10-06 11:30:28.279842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.790 qpair failed and we were unable to recover it. 
00:35:30.790 [2024-10-06 11:30:28.289695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.790 [2024-10-06 11:30:28.289757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.790 [2024-10-06 11:30:28.289771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.790 [2024-10-06 11:30:28.289777] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.790 [2024-10-06 11:30:28.289783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.790 [2024-10-06 11:30:28.289797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.790 qpair failed and we were unable to recover it. 00:35:30.790 [2024-10-06 11:30:28.299661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.790 [2024-10-06 11:30:28.299724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.790 [2024-10-06 11:30:28.299738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.790 [2024-10-06 11:30:28.299745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.790 [2024-10-06 11:30:28.299751] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.790 [2024-10-06 11:30:28.299766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.790 qpair failed and we were unable to recover it. 00:35:30.790 [2024-10-06 11:30:28.309783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.790 [2024-10-06 11:30:28.309850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.790 [2024-10-06 11:30:28.309865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.790 [2024-10-06 11:30:28.309871] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.790 [2024-10-06 11:30:28.309877] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.790 [2024-10-06 11:30:28.309892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.790 qpair failed and we were unable to recover it. 
00:35:30.790 [2024-10-06 11:30:28.319807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.790 [2024-10-06 11:30:28.319878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.790 [2024-10-06 11:30:28.319892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.790 [2024-10-06 11:30:28.319899] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.790 [2024-10-06 11:30:28.319905] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.790 [2024-10-06 11:30:28.319920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.790 qpair failed and we were unable to recover it. 00:35:30.790 [2024-10-06 11:30:28.329857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.790 [2024-10-06 11:30:28.329963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.790 [2024-10-06 11:30:28.329977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.790 [2024-10-06 11:30:28.329984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.790 [2024-10-06 11:30:28.329990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.790 [2024-10-06 11:30:28.330005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.790 qpair failed and we were unable to recover it. 00:35:30.790 [2024-10-06 11:30:28.339859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.790 [2024-10-06 11:30:28.339933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.790 [2024-10-06 11:30:28.339947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.790 [2024-10-06 11:30:28.339954] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.790 [2024-10-06 11:30:28.339960] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.790 [2024-10-06 11:30:28.339974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.790 qpair failed and we were unable to recover it. 
00:35:30.790 [2024-10-06 11:30:28.349940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.790 [2024-10-06 11:30:28.350036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.790 [2024-10-06 11:30:28.350051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.790 [2024-10-06 11:30:28.350061] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.790 [2024-10-06 11:30:28.350070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.790 [2024-10-06 11:30:28.350085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.790 qpair failed and we were unable to recover it. 00:35:30.790 [2024-10-06 11:30:28.359842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:30.790 [2024-10-06 11:30:28.359909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:30.790 [2024-10-06 11:30:28.359923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:30.790 [2024-10-06 11:30:28.359929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:30.790 [2024-10-06 11:30:28.359935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:30.790 [2024-10-06 11:30:28.359950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.790 qpair failed and we were unable to recover it. 00:35:31.051 [2024-10-06 11:30:28.369957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.051 [2024-10-06 11:30:28.370025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.051 [2024-10-06 11:30:28.370040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.051 [2024-10-06 11:30:28.370046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.051 [2024-10-06 11:30:28.370053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.051 [2024-10-06 11:30:28.370071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.051 qpair failed and we were unable to recover it. 
00:35:31.051 [2024-10-06 11:30:28.380009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.051 [2024-10-06 11:30:28.380115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.051 [2024-10-06 11:30:28.380129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.051 [2024-10-06 11:30:28.380136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.051 [2024-10-06 11:30:28.380142] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.051 [2024-10-06 11:30:28.380156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.051 qpair failed and we were unable to recover it. 00:35:31.051 [2024-10-06 11:30:28.390011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.051 [2024-10-06 11:30:28.390082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.051 [2024-10-06 11:30:28.390097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.051 [2024-10-06 11:30:28.390103] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.051 [2024-10-06 11:30:28.390109] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.051 [2024-10-06 11:30:28.390124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.051 qpair failed and we were unable to recover it. 00:35:31.051 [2024-10-06 11:30:28.400024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.051 [2024-10-06 11:30:28.400092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.051 [2024-10-06 11:30:28.400107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.051 [2024-10-06 11:30:28.400114] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.051 [2024-10-06 11:30:28.400120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.051 [2024-10-06 11:30:28.400134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.051 qpair failed and we were unable to recover it. 
00:35:31.051 [2024-10-06 11:30:28.410030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.051 [2024-10-06 11:30:28.410094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.051 [2024-10-06 11:30:28.410110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.051 [2024-10-06 11:30:28.410116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.051 [2024-10-06 11:30:28.410122] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.051 [2024-10-06 11:30:28.410137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.051 qpair failed and we were unable to recover it. 00:35:31.051 [2024-10-06 11:30:28.420069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.051 [2024-10-06 11:30:28.420173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.051 [2024-10-06 11:30:28.420188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.051 [2024-10-06 11:30:28.420195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.051 [2024-10-06 11:30:28.420201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.051 [2024-10-06 11:30:28.420216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.051 qpair failed and we were unable to recover it. 00:35:31.051 [2024-10-06 11:30:28.430101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.051 [2024-10-06 11:30:28.430170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.051 [2024-10-06 11:30:28.430184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.051 [2024-10-06 11:30:28.430191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.051 [2024-10-06 11:30:28.430196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.051 [2024-10-06 11:30:28.430211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.051 qpair failed and we were unable to recover it. 
00:35:31.051 [2024-10-06 11:30:28.440164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.051 [2024-10-06 11:30:28.440223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.051 [2024-10-06 11:30:28.440238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.051 [2024-10-06 11:30:28.440247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.051 [2024-10-06 11:30:28.440253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.051 [2024-10-06 11:30:28.440268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.051 qpair failed and we were unable to recover it. 00:35:31.051 [2024-10-06 11:30:28.450096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.051 [2024-10-06 11:30:28.450177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.051 [2024-10-06 11:30:28.450191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.051 [2024-10-06 11:30:28.450198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.051 [2024-10-06 11:30:28.450203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.051 [2024-10-06 11:30:28.450219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.051 qpair failed and we were unable to recover it. 00:35:31.051 [2024-10-06 11:30:28.460184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.051 [2024-10-06 11:30:28.460251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.051 [2024-10-06 11:30:28.460265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.051 [2024-10-06 11:30:28.460272] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.051 [2024-10-06 11:30:28.460278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.051 [2024-10-06 11:30:28.460293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.051 qpair failed and we were unable to recover it. 
00:35:31.051 [2024-10-06 11:30:28.470205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.051 [2024-10-06 11:30:28.470274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.051 [2024-10-06 11:30:28.470288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.051 [2024-10-06 11:30:28.470294] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.052 [2024-10-06 11:30:28.470300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.052 [2024-10-06 11:30:28.470315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.052 qpair failed and we were unable to recover it. 00:35:31.052 [2024-10-06 11:30:28.480229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.052 [2024-10-06 11:30:28.480292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.052 [2024-10-06 11:30:28.480306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.052 [2024-10-06 11:30:28.480312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.052 [2024-10-06 11:30:28.480318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.052 [2024-10-06 11:30:28.480332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.052 qpair failed and we were unable to recover it. 00:35:31.052 [2024-10-06 11:30:28.490259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.052 [2024-10-06 11:30:28.490327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.052 [2024-10-06 11:30:28.490341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.052 [2024-10-06 11:30:28.490348] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.052 [2024-10-06 11:30:28.490354] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.052 [2024-10-06 11:30:28.490369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.052 qpair failed and we were unable to recover it. 
00:35:31.052 [2024-10-06 11:30:28.500296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.052 [2024-10-06 11:30:28.500361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.052 [2024-10-06 11:30:28.500375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.052 [2024-10-06 11:30:28.500381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.052 [2024-10-06 11:30:28.500387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.052 [2024-10-06 11:30:28.500401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.052 qpair failed and we were unable to recover it. 00:35:31.052 [2024-10-06 11:30:28.510338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.052 [2024-10-06 11:30:28.510404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.052 [2024-10-06 11:30:28.510419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.052 [2024-10-06 11:30:28.510426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.052 [2024-10-06 11:30:28.510432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.052 [2024-10-06 11:30:28.510447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.052 qpair failed and we were unable to recover it. 00:35:31.052 [2024-10-06 11:30:28.520425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.052 [2024-10-06 11:30:28.520491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.052 [2024-10-06 11:30:28.520506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.052 [2024-10-06 11:30:28.520512] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.052 [2024-10-06 11:30:28.520518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.052 [2024-10-06 11:30:28.520533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.052 qpair failed and we were unable to recover it. 
00:35:31.052 [2024-10-06 11:30:28.530401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.052 [2024-10-06 11:30:28.530508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.052 [2024-10-06 11:30:28.530522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.052 [2024-10-06 11:30:28.530532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.052 [2024-10-06 11:30:28.530538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.052 [2024-10-06 11:30:28.530553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.052 qpair failed and we were unable to recover it. 00:35:31.052 [2024-10-06 11:30:28.540419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.052 [2024-10-06 11:30:28.540482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.052 [2024-10-06 11:30:28.540496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.052 [2024-10-06 11:30:28.540503] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.052 [2024-10-06 11:30:28.540509] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.052 [2024-10-06 11:30:28.540523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.052 qpair failed and we were unable to recover it. 00:35:31.052 [2024-10-06 11:30:28.550469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.052 [2024-10-06 11:30:28.550536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.052 [2024-10-06 11:30:28.550551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.052 [2024-10-06 11:30:28.550558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.052 [2024-10-06 11:30:28.550563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.052 [2024-10-06 11:30:28.550578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.052 qpair failed and we were unable to recover it. 
00:35:31.052 [2024-10-06 11:30:28.560496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.052 [2024-10-06 11:30:28.560590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.052 [2024-10-06 11:30:28.560605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.052 [2024-10-06 11:30:28.560611] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.052 [2024-10-06 11:30:28.560617] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.052 [2024-10-06 11:30:28.560633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.052 qpair failed and we were unable to recover it. 00:35:31.052 [2024-10-06 11:30:28.570471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.052 [2024-10-06 11:30:28.570572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.052 [2024-10-06 11:30:28.570587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.052 [2024-10-06 11:30:28.570593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.052 [2024-10-06 11:30:28.570599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.052 [2024-10-06 11:30:28.570614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.052 qpair failed and we were unable to recover it. 00:35:31.052 [2024-10-06 11:30:28.580507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.052 [2024-10-06 11:30:28.580571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.052 [2024-10-06 11:30:28.580586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.052 [2024-10-06 11:30:28.580592] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.052 [2024-10-06 11:30:28.580597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.052 [2024-10-06 11:30:28.580612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.052 qpair failed and we were unable to recover it. 
00:35:31.052 [2024-10-06 11:30:28.590555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.052 [2024-10-06 11:30:28.590628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.052 [2024-10-06 11:30:28.590643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.052 [2024-10-06 11:30:28.590649] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.052 [2024-10-06 11:30:28.590655] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.053 [2024-10-06 11:30:28.590669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.053 qpair failed and we were unable to recover it. 00:35:31.053 [2024-10-06 11:30:28.600602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.053 [2024-10-06 11:30:28.600668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.053 [2024-10-06 11:30:28.600682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.053 [2024-10-06 11:30:28.600689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.053 [2024-10-06 11:30:28.600694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.053 [2024-10-06 11:30:28.600709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.053 qpair failed and we were unable to recover it. 00:35:31.053 [2024-10-06 11:30:28.610558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.053 [2024-10-06 11:30:28.610625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.053 [2024-10-06 11:30:28.610640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.053 [2024-10-06 11:30:28.610647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.053 [2024-10-06 11:30:28.610653] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.053 [2024-10-06 11:30:28.610668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.053 qpair failed and we were unable to recover it. 
00:35:31.053 [2024-10-06 11:30:28.620591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.053 [2024-10-06 11:30:28.620657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.053 [2024-10-06 11:30:28.620675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.053 [2024-10-06 11:30:28.620682] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.053 [2024-10-06 11:30:28.620688] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.053 [2024-10-06 11:30:28.620702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.053 qpair failed and we were unable to recover it. 00:35:31.313 [2024-10-06 11:30:28.630619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.313 [2024-10-06 11:30:28.630735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.313 [2024-10-06 11:30:28.630756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.313 [2024-10-06 11:30:28.630763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.313 [2024-10-06 11:30:28.630769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.313 [2024-10-06 11:30:28.630784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.313 qpair failed and we were unable to recover it. 00:35:31.313 [2024-10-06 11:30:28.640636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.313 [2024-10-06 11:30:28.640703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.313 [2024-10-06 11:30:28.640718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.313 [2024-10-06 11:30:28.640724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.313 [2024-10-06 11:30:28.640730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.313 [2024-10-06 11:30:28.640745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.313 qpair failed and we were unable to recover it. 
00:35:31.313 [2024-10-06 11:30:28.650661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.313 [2024-10-06 11:30:28.650726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.313 [2024-10-06 11:30:28.650741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.313 [2024-10-06 11:30:28.650748] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.313 [2024-10-06 11:30:28.650754] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.313 [2024-10-06 11:30:28.650769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.313 qpair failed and we were unable to recover it. 00:35:31.313 [2024-10-06 11:30:28.660717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.313 [2024-10-06 11:30:28.660790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.313 [2024-10-06 11:30:28.660805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.313 [2024-10-06 11:30:28.660812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.313 [2024-10-06 11:30:28.660817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.313 [2024-10-06 11:30:28.660836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.313 qpair failed and we were unable to recover it. 00:35:31.313 [2024-10-06 11:30:28.670770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.313 [2024-10-06 11:30:28.670836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.313 [2024-10-06 11:30:28.670851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.313 [2024-10-06 11:30:28.670858] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.313 [2024-10-06 11:30:28.670864] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.313 [2024-10-06 11:30:28.670878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.313 qpair failed and we were unable to recover it. 
00:35:31.313 [2024-10-06 11:30:28.680869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.313 [2024-10-06 11:30:28.680970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.313 [2024-10-06 11:30:28.680985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.313 [2024-10-06 11:30:28.680991] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.313 [2024-10-06 11:30:28.680998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.313 [2024-10-06 11:30:28.681013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.313 qpair failed and we were unable to recover it. 00:35:31.313 [2024-10-06 11:30:28.690801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.313 [2024-10-06 11:30:28.690868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.313 [2024-10-06 11:30:28.690882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.313 [2024-10-06 11:30:28.690889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.313 [2024-10-06 11:30:28.690895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.313 [2024-10-06 11:30:28.690910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.313 qpair failed and we were unable to recover it. 00:35:31.313 [2024-10-06 11:30:28.700794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.313 [2024-10-06 11:30:28.700859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.313 [2024-10-06 11:30:28.700874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.313 [2024-10-06 11:30:28.700881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.313 [2024-10-06 11:30:28.700887] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.313 [2024-10-06 11:30:28.700902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.313 qpair failed and we were unable to recover it. 
00:35:31.313 [2024-10-06 11:30:28.710827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.313 [2024-10-06 11:30:28.710917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.313 [2024-10-06 11:30:28.710934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.313 [2024-10-06 11:30:28.710941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.313 [2024-10-06 11:30:28.710947] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.313 [2024-10-06 11:30:28.710961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.313 qpair failed and we were unable to recover it. 00:35:31.313 [2024-10-06 11:30:28.720932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.313 [2024-10-06 11:30:28.721015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.313 [2024-10-06 11:30:28.721030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.313 [2024-10-06 11:30:28.721037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.313 [2024-10-06 11:30:28.721043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.313 [2024-10-06 11:30:28.721061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.313 qpair failed and we were unable to recover it. 00:35:31.313 [2024-10-06 11:30:28.730971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.313 [2024-10-06 11:30:28.731050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.314 [2024-10-06 11:30:28.731069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.314 [2024-10-06 11:30:28.731076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.314 [2024-10-06 11:30:28.731082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.314 [2024-10-06 11:30:28.731096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.314 qpair failed and we were unable to recover it. 
00:35:31.314 [2024-10-06 11:30:28.740946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.314 [2024-10-06 11:30:28.741029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.314 [2024-10-06 11:30:28.741043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.314 [2024-10-06 11:30:28.741050] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.314 [2024-10-06 11:30:28.741056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.314 [2024-10-06 11:30:28.741080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.314 qpair failed and we were unable to recover it. 00:35:31.314 [2024-10-06 11:30:28.750955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.314 [2024-10-06 11:30:28.751022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.314 [2024-10-06 11:30:28.751036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.314 [2024-10-06 11:30:28.751043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.314 [2024-10-06 11:30:28.751049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.314 [2024-10-06 11:30:28.751071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.314 qpair failed and we were unable to recover it. 00:35:31.314 [2024-10-06 11:30:28.761024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.314 [2024-10-06 11:30:28.761088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.314 [2024-10-06 11:30:28.761103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.314 [2024-10-06 11:30:28.761110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.314 [2024-10-06 11:30:28.761116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.314 [2024-10-06 11:30:28.761131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.314 qpair failed and we were unable to recover it. 
00:35:31.314 [2024-10-06 11:30:28.771048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.314 [2024-10-06 11:30:28.771114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.314 [2024-10-06 11:30:28.771129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.314 [2024-10-06 11:30:28.771136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.314 [2024-10-06 11:30:28.771141] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.314 [2024-10-06 11:30:28.771156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.314 qpair failed and we were unable to recover it. 00:35:31.314 [2024-10-06 11:30:28.781094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.314 [2024-10-06 11:30:28.781161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.314 [2024-10-06 11:30:28.781176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.314 [2024-10-06 11:30:28.781182] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.314 [2024-10-06 11:30:28.781188] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.314 [2024-10-06 11:30:28.781203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.314 qpair failed and we were unable to recover it. 00:35:31.314 [2024-10-06 11:30:28.791169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.314 [2024-10-06 11:30:28.791235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.314 [2024-10-06 11:30:28.791251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.314 [2024-10-06 11:30:28.791257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.314 [2024-10-06 11:30:28.791263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.314 [2024-10-06 11:30:28.791278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.314 qpair failed and we were unable to recover it. 
00:35:31.314 [2024-10-06 11:30:28.801174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.314 [2024-10-06 11:30:28.801245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.314 [2024-10-06 11:30:28.801262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.314 [2024-10-06 11:30:28.801268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.314 [2024-10-06 11:30:28.801274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.314 [2024-10-06 11:30:28.801289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.314 qpair failed and we were unable to recover it. 00:35:31.314 [2024-10-06 11:30:28.811182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.314 [2024-10-06 11:30:28.811248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.314 [2024-10-06 11:30:28.811263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.314 [2024-10-06 11:30:28.811269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.314 [2024-10-06 11:30:28.811275] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.314 [2024-10-06 11:30:28.811290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.314 qpair failed and we were unable to recover it. 00:35:31.314 [2024-10-06 11:30:28.821168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.314 [2024-10-06 11:30:28.821238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.314 [2024-10-06 11:30:28.821252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.314 [2024-10-06 11:30:28.821259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.314 [2024-10-06 11:30:28.821264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.314 [2024-10-06 11:30:28.821279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.314 qpair failed and we were unable to recover it. 
00:35:31.314 [2024-10-06 11:30:28.831234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.314 [2024-10-06 11:30:28.831342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.314 [2024-10-06 11:30:28.831356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.314 [2024-10-06 11:30:28.831363] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.314 [2024-10-06 11:30:28.831369] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.314 [2024-10-06 11:30:28.831384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.314 qpair failed and we were unable to recover it. 00:35:31.314 [2024-10-06 11:30:28.841272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.314 [2024-10-06 11:30:28.841338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.314 [2024-10-06 11:30:28.841352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.314 [2024-10-06 11:30:28.841358] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.314 [2024-10-06 11:30:28.841368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.314 [2024-10-06 11:30:28.841382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.314 qpair failed and we were unable to recover it. 00:35:31.314 [2024-10-06 11:30:28.851222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.314 [2024-10-06 11:30:28.851285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.314 [2024-10-06 11:30:28.851299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.314 [2024-10-06 11:30:28.851306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.314 [2024-10-06 11:30:28.851311] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.314 [2024-10-06 11:30:28.851326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.314 qpair failed and we were unable to recover it. 
00:35:31.314 [2024-10-06 11:30:28.861354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.315 [2024-10-06 11:30:28.861468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.315 [2024-10-06 11:30:28.861483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.315 [2024-10-06 11:30:28.861490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.315 [2024-10-06 11:30:28.861496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.315 [2024-10-06 11:30:28.861511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.315 qpair failed and we were unable to recover it. 00:35:31.315 [2024-10-06 11:30:28.871274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.315 [2024-10-06 11:30:28.871343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.315 [2024-10-06 11:30:28.871358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.315 [2024-10-06 11:30:28.871364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.315 [2024-10-06 11:30:28.871370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.315 [2024-10-06 11:30:28.871385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.315 qpair failed and we were unable to recover it. 00:35:31.315 [2024-10-06 11:30:28.881416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.315 [2024-10-06 11:30:28.881484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.315 [2024-10-06 11:30:28.881499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.315 [2024-10-06 11:30:28.881505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.315 [2024-10-06 11:30:28.881511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.315 [2024-10-06 11:30:28.881526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.315 qpair failed and we were unable to recover it. 
00:35:31.575 [2024-10-06 11:30:28.891433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.575 [2024-10-06 11:30:28.891509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.575 [2024-10-06 11:30:28.891524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.575 [2024-10-06 11:30:28.891530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.575 [2024-10-06 11:30:28.891536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.575 [2024-10-06 11:30:28.891550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.575 qpair failed and we were unable to recover it. 00:35:31.575 [2024-10-06 11:30:28.901459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.575 [2024-10-06 11:30:28.901520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.575 [2024-10-06 11:30:28.901535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.575 [2024-10-06 11:30:28.901541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.575 [2024-10-06 11:30:28.901547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.575 [2024-10-06 11:30:28.901561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.575 qpair failed and we were unable to recover it. 00:35:31.575 [2024-10-06 11:30:28.911473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.575 [2024-10-06 11:30:28.911538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.575 [2024-10-06 11:30:28.911553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.575 [2024-10-06 11:30:28.911560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.575 [2024-10-06 11:30:28.911566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.575 [2024-10-06 11:30:28.911581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.575 qpair failed and we were unable to recover it. 
00:35:31.575 [2024-10-06 11:30:28.921527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.575 [2024-10-06 11:30:28.921632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.575 [2024-10-06 11:30:28.921646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.575 [2024-10-06 11:30:28.921652] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.575 [2024-10-06 11:30:28.921658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.575 [2024-10-06 11:30:28.921673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.575 qpair failed and we were unable to recover it. 00:35:31.575 [2024-10-06 11:30:28.931483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.575 [2024-10-06 11:30:28.931549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.575 [2024-10-06 11:30:28.931568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.575 [2024-10-06 11:30:28.931578] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.576 [2024-10-06 11:30:28.931583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.576 [2024-10-06 11:30:28.931599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.576 qpair failed and we were unable to recover it. 00:35:31.576 [2024-10-06 11:30:28.941530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.576 [2024-10-06 11:30:28.941589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.576 [2024-10-06 11:30:28.941605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.576 [2024-10-06 11:30:28.941611] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.576 [2024-10-06 11:30:28.941617] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.576 [2024-10-06 11:30:28.941633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.576 qpair failed and we were unable to recover it. 
00:35:31.576 [2024-10-06 11:30:28.951594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.576 [2024-10-06 11:30:28.951661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.576 [2024-10-06 11:30:28.951676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.576 [2024-10-06 11:30:28.951682] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.576 [2024-10-06 11:30:28.951688] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.576 [2024-10-06 11:30:28.951703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.576 qpair failed and we were unable to recover it. 00:35:31.576 [2024-10-06 11:30:28.961624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.576 [2024-10-06 11:30:28.961691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.576 [2024-10-06 11:30:28.961706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.576 [2024-10-06 11:30:28.961712] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.576 [2024-10-06 11:30:28.961718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.576 [2024-10-06 11:30:28.961733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.576 qpair failed and we were unable to recover it. 00:35:31.576 [2024-10-06 11:30:28.971605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.576 [2024-10-06 11:30:28.971668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.576 [2024-10-06 11:30:28.971682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.576 [2024-10-06 11:30:28.971688] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.576 [2024-10-06 11:30:28.971694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.576 [2024-10-06 11:30:28.971709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.576 qpair failed and we were unable to recover it. 
00:35:31.576 [2024-10-06 11:30:28.981644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.576 [2024-10-06 11:30:28.981744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.576 [2024-10-06 11:30:28.981759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.576 [2024-10-06 11:30:28.981765] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.576 [2024-10-06 11:30:28.981772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.576 [2024-10-06 11:30:28.981786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.576 qpair failed and we were unable to recover it. 00:35:31.576 [2024-10-06 11:30:28.991670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.576 [2024-10-06 11:30:28.991739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.576 [2024-10-06 11:30:28.991753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.576 [2024-10-06 11:30:28.991760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.576 [2024-10-06 11:30:28.991766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.576 [2024-10-06 11:30:28.991780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.576 qpair failed and we were unable to recover it. 00:35:31.576 [2024-10-06 11:30:29.001645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.576 [2024-10-06 11:30:29.001715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.576 [2024-10-06 11:30:29.001729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.576 [2024-10-06 11:30:29.001736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.576 [2024-10-06 11:30:29.001742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.576 [2024-10-06 11:30:29.001757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.576 qpair failed and we were unable to recover it. 
00:35:31.576 [2024-10-06 11:30:29.011730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.576 [2024-10-06 11:30:29.011791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.576 [2024-10-06 11:30:29.011805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.576 [2024-10-06 11:30:29.011812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.576 [2024-10-06 11:30:29.011818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.576 [2024-10-06 11:30:29.011832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.576 qpair failed and we were unable to recover it. 00:35:31.576 [2024-10-06 11:30:29.021673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.576 [2024-10-06 11:30:29.021738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.576 [2024-10-06 11:30:29.021753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.576 [2024-10-06 11:30:29.021766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.576 [2024-10-06 11:30:29.021771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.576 [2024-10-06 11:30:29.021786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.576 qpair failed and we were unable to recover it. 00:35:31.576 [2024-10-06 11:30:29.031826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.576 [2024-10-06 11:30:29.031892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.576 [2024-10-06 11:30:29.031907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.576 [2024-10-06 11:30:29.031913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.576 [2024-10-06 11:30:29.031919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.576 [2024-10-06 11:30:29.031933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.576 qpair failed and we were unable to recover it. 
00:35:31.576 [2024-10-06 11:30:29.041792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.576 [2024-10-06 11:30:29.041857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.576 [2024-10-06 11:30:29.041872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.576 [2024-10-06 11:30:29.041878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.576 [2024-10-06 11:30:29.041884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.576 [2024-10-06 11:30:29.041898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.576 qpair failed and we were unable to recover it. 00:35:31.576 [2024-10-06 11:30:29.051836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.576 [2024-10-06 11:30:29.051904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.576 [2024-10-06 11:30:29.051919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.576 [2024-10-06 11:30:29.051926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.576 [2024-10-06 11:30:29.051932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.576 [2024-10-06 11:30:29.051946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.576 qpair failed and we were unable to recover it. 00:35:31.576 [2024-10-06 11:30:29.061855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.576 [2024-10-06 11:30:29.061929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.576 [2024-10-06 11:30:29.061944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.576 [2024-10-06 11:30:29.061951] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.577 [2024-10-06 11:30:29.061957] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.577 [2024-10-06 11:30:29.061971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.577 qpair failed and we were unable to recover it. 
00:35:31.577 [2024-10-06 11:30:29.071912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.577 [2024-10-06 11:30:29.071988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.577 [2024-10-06 11:30:29.072002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.577 [2024-10-06 11:30:29.072009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.577 [2024-10-06 11:30:29.072014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.577 [2024-10-06 11:30:29.072029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.577 qpair failed and we were unable to recover it. 00:35:31.577 [2024-10-06 11:30:29.081931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.577 [2024-10-06 11:30:29.082010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.577 [2024-10-06 11:30:29.082025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.577 [2024-10-06 11:30:29.082031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.577 [2024-10-06 11:30:29.082037] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.577 [2024-10-06 11:30:29.082052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.577 qpair failed and we were unable to recover it. 00:35:31.577 [2024-10-06 11:30:29.091984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.577 [2024-10-06 11:30:29.092051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.577 [2024-10-06 11:30:29.092069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.577 [2024-10-06 11:30:29.092076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.577 [2024-10-06 11:30:29.092082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.577 [2024-10-06 11:30:29.092098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.577 qpair failed and we were unable to recover it. 
00:35:31.577 [2024-10-06 11:30:29.101955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.577 [2024-10-06 11:30:29.102018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.577 [2024-10-06 11:30:29.102033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.577 [2024-10-06 11:30:29.102040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.577 [2024-10-06 11:30:29.102045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.577 [2024-10-06 11:30:29.102063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.577 qpair failed and we were unable to recover it. 00:35:31.577 [2024-10-06 11:30:29.112023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.577 [2024-10-06 11:30:29.112090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.577 [2024-10-06 11:30:29.112108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.577 [2024-10-06 11:30:29.112115] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.577 [2024-10-06 11:30:29.112120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.577 [2024-10-06 11:30:29.112135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.577 qpair failed and we were unable to recover it. 00:35:31.577 [2024-10-06 11:30:29.122026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.577 [2024-10-06 11:30:29.122094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.577 [2024-10-06 11:30:29.122108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.577 [2024-10-06 11:30:29.122115] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.577 [2024-10-06 11:30:29.122121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.577 [2024-10-06 11:30:29.122136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.577 qpair failed and we were unable to recover it. 
00:35:31.577 [2024-10-06 11:30:29.132065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.577 [2024-10-06 11:30:29.132140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.577 [2024-10-06 11:30:29.132155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.577 [2024-10-06 11:30:29.132161] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.577 [2024-10-06 11:30:29.132167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.577 [2024-10-06 11:30:29.132182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.577 qpair failed and we were unable to recover it. 00:35:31.577 [2024-10-06 11:30:29.142088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.577 [2024-10-06 11:30:29.142166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.577 [2024-10-06 11:30:29.142181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.577 [2024-10-06 11:30:29.142187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.577 [2024-10-06 11:30:29.142193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.577 [2024-10-06 11:30:29.142208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.577 qpair failed and we were unable to recover it. 00:35:31.838 [2024-10-06 11:30:29.152131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.838 [2024-10-06 11:30:29.152212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.838 [2024-10-06 11:30:29.152227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.838 [2024-10-06 11:30:29.152233] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.838 [2024-10-06 11:30:29.152239] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.838 [2024-10-06 11:30:29.152257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.838 qpair failed and we were unable to recover it. 
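The run above is one long burst of identical I/O-qpair CONNECT failures: the target-side _nvmf_ctrlr_add_io_qpair rejects each attempt because it does not recognize controller ID 0x1, the host sees the Connect command complete with sct 1 / sc 130, and the TCP qpair is dropped with transport error -6. When skimming a capture like this, a quick way to size the burst is to count the failure markers and bracket their timestamps. The commands below are a minimal sketch, assuming this console output was saved to build.log (a placeholder path, adjust to wherever your capture lives).

# Count the failed I/O-qpair CONNECT attempts in the capture
grep -o 'qpair failed and we were unable to recover it' build.log | wc -l

# Show the first and last target-side rejections, to bracket the burst in time
grep -o '\[2024-10-06 [^]]*\] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair' build.log | sed -n '1p;$p'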
00:35:31.838 [2024-10-06 11:30:29.162205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.838 [2024-10-06 11:30:29.162271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.838 [2024-10-06 11:30:29.162286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.838 [2024-10-06 11:30:29.162292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.838 [2024-10-06 11:30:29.162298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.838 [2024-10-06 11:30:29.162313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.838 qpair failed and we were unable to recover it. 00:35:31.838 [2024-10-06 11:30:29.172172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.838 [2024-10-06 11:30:29.172246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.838 [2024-10-06 11:30:29.172260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.838 [2024-10-06 11:30:29.172266] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.838 [2024-10-06 11:30:29.172272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.839 [2024-10-06 11:30:29.172286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.839 qpair failed and we were unable to recover it. 00:35:31.839 [2024-10-06 11:30:29.182216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.839 [2024-10-06 11:30:29.182282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.839 [2024-10-06 11:30:29.182296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.839 [2024-10-06 11:30:29.182302] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.839 [2024-10-06 11:30:29.182308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.839 [2024-10-06 11:30:29.182323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.839 qpair failed and we were unable to recover it. 
00:35:31.839 [2024-10-06 11:30:29.192291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.839 [2024-10-06 11:30:29.192359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.839 [2024-10-06 11:30:29.192374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.839 [2024-10-06 11:30:29.192380] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.839 [2024-10-06 11:30:29.192386] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.839 [2024-10-06 11:30:29.192401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.839 qpair failed and we were unable to recover it. 00:35:31.839 [2024-10-06 11:30:29.202260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.839 [2024-10-06 11:30:29.202327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.839 [2024-10-06 11:30:29.202345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.839 [2024-10-06 11:30:29.202352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.839 [2024-10-06 11:30:29.202358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.839 [2024-10-06 11:30:29.202372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.839 qpair failed and we were unable to recover it. 00:35:31.839 [2024-10-06 11:30:29.212246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.839 [2024-10-06 11:30:29.212348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.839 [2024-10-06 11:30:29.212363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.839 [2024-10-06 11:30:29.212369] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.839 [2024-10-06 11:30:29.212375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.839 [2024-10-06 11:30:29.212390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.839 qpair failed and we were unable to recover it. 
00:35:31.839 [2024-10-06 11:30:29.222322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.839 [2024-10-06 11:30:29.222391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.839 [2024-10-06 11:30:29.222406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.839 [2024-10-06 11:30:29.222412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.839 [2024-10-06 11:30:29.222418] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.839 [2024-10-06 11:30:29.222433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.839 qpair failed and we were unable to recover it. 00:35:31.839 [2024-10-06 11:30:29.232361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.839 [2024-10-06 11:30:29.232434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.839 [2024-10-06 11:30:29.232449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.839 [2024-10-06 11:30:29.232455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.839 [2024-10-06 11:30:29.232461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.839 [2024-10-06 11:30:29.232476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.839 qpair failed and we were unable to recover it. 00:35:31.839 [2024-10-06 11:30:29.242389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.839 [2024-10-06 11:30:29.242467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.839 [2024-10-06 11:30:29.242481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.839 [2024-10-06 11:30:29.242488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.839 [2024-10-06 11:30:29.242493] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.839 [2024-10-06 11:30:29.242511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.839 qpair failed and we were unable to recover it. 
00:35:31.839 [2024-10-06 11:30:29.252330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.839 [2024-10-06 11:30:29.252392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.839 [2024-10-06 11:30:29.252407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.839 [2024-10-06 11:30:29.252413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.839 [2024-10-06 11:30:29.252419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.839 [2024-10-06 11:30:29.252434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.839 qpair failed and we were unable to recover it. 00:35:31.839 [2024-10-06 11:30:29.262452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.839 [2024-10-06 11:30:29.262525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.839 [2024-10-06 11:30:29.262539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.839 [2024-10-06 11:30:29.262545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.839 [2024-10-06 11:30:29.262551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.839 [2024-10-06 11:30:29.262566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.839 qpair failed and we were unable to recover it. 00:35:31.839 [2024-10-06 11:30:29.272518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.839 [2024-10-06 11:30:29.272616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.839 [2024-10-06 11:30:29.272630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.839 [2024-10-06 11:30:29.272636] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.839 [2024-10-06 11:30:29.272642] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.839 [2024-10-06 11:30:29.272657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.839 qpair failed and we were unable to recover it. 
00:35:31.839 [2024-10-06 11:30:29.282483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.839 [2024-10-06 11:30:29.282550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.839 [2024-10-06 11:30:29.282564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.839 [2024-10-06 11:30:29.282570] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.839 [2024-10-06 11:30:29.282576] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.839 [2024-10-06 11:30:29.282590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.839 qpair failed and we were unable to recover it. 00:35:31.839 [2024-10-06 11:30:29.292525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.839 [2024-10-06 11:30:29.292600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.839 [2024-10-06 11:30:29.292617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.839 [2024-10-06 11:30:29.292624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.839 [2024-10-06 11:30:29.292629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.839 [2024-10-06 11:30:29.292644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.839 qpair failed and we were unable to recover it. 00:35:31.839 [2024-10-06 11:30:29.302566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.839 [2024-10-06 11:30:29.302628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.839 [2024-10-06 11:30:29.302643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.839 [2024-10-06 11:30:29.302649] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.839 [2024-10-06 11:30:29.302655] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.840 [2024-10-06 11:30:29.302669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.840 qpair failed and we were unable to recover it. 
00:35:31.840 [2024-10-06 11:30:29.312621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.840 [2024-10-06 11:30:29.312689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.840 [2024-10-06 11:30:29.312703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.840 [2024-10-06 11:30:29.312710] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.840 [2024-10-06 11:30:29.312715] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.840 [2024-10-06 11:30:29.312730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.840 qpair failed and we were unable to recover it. 00:35:31.840 [2024-10-06 11:30:29.322603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.840 [2024-10-06 11:30:29.322668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.840 [2024-10-06 11:30:29.322683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.840 [2024-10-06 11:30:29.322689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.840 [2024-10-06 11:30:29.322695] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.840 [2024-10-06 11:30:29.322710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.840 qpair failed and we were unable to recover it. 00:35:31.840 [2024-10-06 11:30:29.332668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.840 [2024-10-06 11:30:29.332776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.840 [2024-10-06 11:30:29.332790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.840 [2024-10-06 11:30:29.332797] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.840 [2024-10-06 11:30:29.332806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.840 [2024-10-06 11:30:29.332821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.840 qpair failed and we were unable to recover it. 
00:35:31.840 [2024-10-06 11:30:29.342603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.840 [2024-10-06 11:30:29.342669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.840 [2024-10-06 11:30:29.342684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.840 [2024-10-06 11:30:29.342690] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.840 [2024-10-06 11:30:29.342696] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.840 [2024-10-06 11:30:29.342711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.840 qpair failed and we were unable to recover it. 00:35:31.840 [2024-10-06 11:30:29.352725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.840 [2024-10-06 11:30:29.352826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.840 [2024-10-06 11:30:29.352840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.840 [2024-10-06 11:30:29.352846] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.840 [2024-10-06 11:30:29.352852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.840 [2024-10-06 11:30:29.352867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.840 qpair failed and we were unable to recover it. 00:35:31.840 [2024-10-06 11:30:29.362714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.840 [2024-10-06 11:30:29.362780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.840 [2024-10-06 11:30:29.362794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.840 [2024-10-06 11:30:29.362801] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.840 [2024-10-06 11:30:29.362807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.840 [2024-10-06 11:30:29.362821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.840 qpair failed and we were unable to recover it. 
00:35:31.840 [2024-10-06 11:30:29.372769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.840 [2024-10-06 11:30:29.372833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.840 [2024-10-06 11:30:29.372847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.840 [2024-10-06 11:30:29.372854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.840 [2024-10-06 11:30:29.372860] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.840 [2024-10-06 11:30:29.372874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.840 qpair failed and we were unable to recover it. 00:35:31.840 [2024-10-06 11:30:29.382784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.840 [2024-10-06 11:30:29.382845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.840 [2024-10-06 11:30:29.382859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.840 [2024-10-06 11:30:29.382866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.840 [2024-10-06 11:30:29.382871] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.840 [2024-10-06 11:30:29.382886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.840 qpair failed and we were unable to recover it. 00:35:31.840 [2024-10-06 11:30:29.392811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.840 [2024-10-06 11:30:29.392877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.840 [2024-10-06 11:30:29.392891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.840 [2024-10-06 11:30:29.392897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.840 [2024-10-06 11:30:29.392903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.840 [2024-10-06 11:30:29.392918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.840 qpair failed and we were unable to recover it. 
00:35:31.840 [2024-10-06 11:30:29.402841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:31.840 [2024-10-06 11:30:29.402907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:31.840 [2024-10-06 11:30:29.402921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:31.840 [2024-10-06 11:30:29.402928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:31.840 [2024-10-06 11:30:29.402934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:31.840 [2024-10-06 11:30:29.402949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.840 qpair failed and we were unable to recover it. 00:35:32.101 [2024-10-06 11:30:29.412862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.101 [2024-10-06 11:30:29.412931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.101 [2024-10-06 11:30:29.412946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.101 [2024-10-06 11:30:29.412953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.101 [2024-10-06 11:30:29.412959] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.101 [2024-10-06 11:30:29.412975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.101 qpair failed and we were unable to recover it. 00:35:32.101 [2024-10-06 11:30:29.422889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.101 [2024-10-06 11:30:29.422955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.101 [2024-10-06 11:30:29.422969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.101 [2024-10-06 11:30:29.422976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.101 [2024-10-06 11:30:29.422985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.102 [2024-10-06 11:30:29.423001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.102 qpair failed and we were unable to recover it. 
00:35:32.102 [2024-10-06 11:30:29.432970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.102 [2024-10-06 11:30:29.433072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.102 [2024-10-06 11:30:29.433087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.102 [2024-10-06 11:30:29.433094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.102 [2024-10-06 11:30:29.433100] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.102 [2024-10-06 11:30:29.433115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.102 qpair failed and we were unable to recover it. 00:35:32.102 [2024-10-06 11:30:29.442963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.102 [2024-10-06 11:30:29.443075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.102 [2024-10-06 11:30:29.443089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.102 [2024-10-06 11:30:29.443095] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.102 [2024-10-06 11:30:29.443102] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.102 [2024-10-06 11:30:29.443117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.102 qpair failed and we were unable to recover it. 00:35:32.102 [2024-10-06 11:30:29.452975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.102 [2024-10-06 11:30:29.453037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.102 [2024-10-06 11:30:29.453051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.102 [2024-10-06 11:30:29.453061] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.102 [2024-10-06 11:30:29.453067] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.102 [2024-10-06 11:30:29.453082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.102 qpair failed and we were unable to recover it. 
00:35:32.102 [2024-10-06 11:30:29.463021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.102 [2024-10-06 11:30:29.463130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.102 [2024-10-06 11:30:29.463152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.102 [2024-10-06 11:30:29.463159] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.102 [2024-10-06 11:30:29.463165] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.102 [2024-10-06 11:30:29.463181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.102 qpair failed and we were unable to recover it. 00:35:32.102 [2024-10-06 11:30:29.472974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.102 [2024-10-06 11:30:29.473042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.102 [2024-10-06 11:30:29.473056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.102 [2024-10-06 11:30:29.473067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.102 [2024-10-06 11:30:29.473073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.102 [2024-10-06 11:30:29.473087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.102 qpair failed and we were unable to recover it. 00:35:32.102 [2024-10-06 11:30:29.483054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.102 [2024-10-06 11:30:29.483176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.102 [2024-10-06 11:30:29.483191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.102 [2024-10-06 11:30:29.483197] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.102 [2024-10-06 11:30:29.483203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.102 [2024-10-06 11:30:29.483218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.102 qpair failed and we were unable to recover it. 
00:35:32.102 [2024-10-06 11:30:29.493121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.102 [2024-10-06 11:30:29.493181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.102 [2024-10-06 11:30:29.493196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.102 [2024-10-06 11:30:29.493203] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.102 [2024-10-06 11:30:29.493208] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.102 [2024-10-06 11:30:29.493223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.102 qpair failed and we were unable to recover it. 00:35:32.102 [2024-10-06 11:30:29.503113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.102 [2024-10-06 11:30:29.503180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.102 [2024-10-06 11:30:29.503194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.102 [2024-10-06 11:30:29.503201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.102 [2024-10-06 11:30:29.503207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.102 [2024-10-06 11:30:29.503221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.102 qpair failed and we were unable to recover it. 00:35:32.102 [2024-10-06 11:30:29.513114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.102 [2024-10-06 11:30:29.513201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.102 [2024-10-06 11:30:29.513216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.102 [2024-10-06 11:30:29.513225] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.102 [2024-10-06 11:30:29.513231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.102 [2024-10-06 11:30:29.513245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.102 qpair failed and we were unable to recover it. 
00:35:32.102 [2024-10-06 11:30:29.523133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.102 [2024-10-06 11:30:29.523204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.102 [2024-10-06 11:30:29.523219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.102 [2024-10-06 11:30:29.523225] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.102 [2024-10-06 11:30:29.523231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.102 [2024-10-06 11:30:29.523246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.102 qpair failed and we were unable to recover it. 00:35:32.102 [2024-10-06 11:30:29.533262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.102 [2024-10-06 11:30:29.533355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.102 [2024-10-06 11:30:29.533369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.102 [2024-10-06 11:30:29.533375] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.102 [2024-10-06 11:30:29.533381] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.102 [2024-10-06 11:30:29.533395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.102 qpair failed and we were unable to recover it. 00:35:32.102 [2024-10-06 11:30:29.543212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.102 [2024-10-06 11:30:29.543282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.102 [2024-10-06 11:30:29.543296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.102 [2024-10-06 11:30:29.543303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.102 [2024-10-06 11:30:29.543308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.102 [2024-10-06 11:30:29.543323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.102 qpair failed and we were unable to recover it. 
00:35:32.102 [2024-10-06 11:30:29.553282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.102 [2024-10-06 11:30:29.553349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.102 [2024-10-06 11:30:29.553363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.103 [2024-10-06 11:30:29.553370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.103 [2024-10-06 11:30:29.553376] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.103 [2024-10-06 11:30:29.553390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.103 qpair failed and we were unable to recover it. 00:35:32.103 [2024-10-06 11:30:29.563323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.103 [2024-10-06 11:30:29.563389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.103 [2024-10-06 11:30:29.563403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.103 [2024-10-06 11:30:29.563409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.103 [2024-10-06 11:30:29.563416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.103 [2024-10-06 11:30:29.563430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.103 qpair failed and we were unable to recover it. 00:35:32.103 [2024-10-06 11:30:29.573324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.103 [2024-10-06 11:30:29.573432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.103 [2024-10-06 11:30:29.573454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.103 [2024-10-06 11:30:29.573461] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.103 [2024-10-06 11:30:29.573466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.103 [2024-10-06 11:30:29.573481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.103 qpair failed and we were unable to recover it. 
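
The same seven-line error group repeats roughly every 10 ms for as long as the disconnect window lasts, so a single run can emit dozens of these blocks. When triaging a failed run it is usually enough to count the groups and the distinct status codes rather than read each one; a small sketch, assuming the console output has been saved to a file (the filename here is only a placeholder):

  grep -c 'qpair failed and we were unable to recover it' nvmf_target_disconnect.log
  grep -o 'sct [0-9]*, sc [0-9]*' nvmf_target_disconnect.log | sort | uniq -c

A single status combination, as in this run (sct 1, sc 130), points at one target-side cause; a mix of combinations usually indicates the transport itself was flapping.
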
00:35:32.103 [2024-10-06 11:30:29.583329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.103 [2024-10-06 11:30:29.583395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.103 [2024-10-06 11:30:29.583410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.103 [2024-10-06 11:30:29.583416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.103 [2024-10-06 11:30:29.583422] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.103 [2024-10-06 11:30:29.583437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.103 qpair failed and we were unable to recover it. 00:35:32.103 [2024-10-06 11:30:29.593375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.103 [2024-10-06 11:30:29.593439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.103 [2024-10-06 11:30:29.593453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.103 [2024-10-06 11:30:29.593460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.103 [2024-10-06 11:30:29.593466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.103 [2024-10-06 11:30:29.593480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.103 qpair failed and we were unable to recover it. 00:35:32.103 [2024-10-06 11:30:29.603411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.103 [2024-10-06 11:30:29.603516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.103 [2024-10-06 11:30:29.603530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.103 [2024-10-06 11:30:29.603540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.103 [2024-10-06 11:30:29.603546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.103 [2024-10-06 11:30:29.603560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.103 qpair failed and we were unable to recover it. 
00:35:32.103 [2024-10-06 11:30:29.613431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.103 [2024-10-06 11:30:29.613512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.103 [2024-10-06 11:30:29.613526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.103 [2024-10-06 11:30:29.613533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.103 [2024-10-06 11:30:29.613538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.103 [2024-10-06 11:30:29.613553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.103 qpair failed and we were unable to recover it. 00:35:32.103 [2024-10-06 11:30:29.623434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.103 [2024-10-06 11:30:29.623535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.103 [2024-10-06 11:30:29.623549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.103 [2024-10-06 11:30:29.623556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.103 [2024-10-06 11:30:29.623562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.103 [2024-10-06 11:30:29.623576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.103 qpair failed and we were unable to recover it. 00:35:32.103 [2024-10-06 11:30:29.633454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.103 [2024-10-06 11:30:29.633521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.103 [2024-10-06 11:30:29.633535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.103 [2024-10-06 11:30:29.633541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.103 [2024-10-06 11:30:29.633547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.103 [2024-10-06 11:30:29.633561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.103 qpair failed and we were unable to recover it. 
00:35:32.103 [2024-10-06 11:30:29.643491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.103 [2024-10-06 11:30:29.643555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.103 [2024-10-06 11:30:29.643570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.103 [2024-10-06 11:30:29.643577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.103 [2024-10-06 11:30:29.643582] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.103 [2024-10-06 11:30:29.643597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.103 qpair failed and we were unable to recover it. 00:35:32.103 [2024-10-06 11:30:29.653542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.103 [2024-10-06 11:30:29.653608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.103 [2024-10-06 11:30:29.653622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.103 [2024-10-06 11:30:29.653629] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.103 [2024-10-06 11:30:29.653635] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.103 [2024-10-06 11:30:29.653649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.103 qpair failed and we were unable to recover it. 00:35:32.103 [2024-10-06 11:30:29.663554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.103 [2024-10-06 11:30:29.663614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.103 [2024-10-06 11:30:29.663628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.103 [2024-10-06 11:30:29.663634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.103 [2024-10-06 11:30:29.663641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.103 [2024-10-06 11:30:29.663655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.103 qpair failed and we were unable to recover it. 
00:35:32.103 [2024-10-06 11:30:29.673595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.103 [2024-10-06 11:30:29.673704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.103 [2024-10-06 11:30:29.673718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.103 [2024-10-06 11:30:29.673724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.103 [2024-10-06 11:30:29.673731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.103 [2024-10-06 11:30:29.673745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.103 qpair failed and we were unable to recover it. 00:35:32.363 [2024-10-06 11:30:29.683622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.363 [2024-10-06 11:30:29.683687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.363 [2024-10-06 11:30:29.683702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.363 [2024-10-06 11:30:29.683709] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.363 [2024-10-06 11:30:29.683715] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.363 [2024-10-06 11:30:29.683730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.363 qpair failed and we were unable to recover it. 00:35:32.363 [2024-10-06 11:30:29.693638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.363 [2024-10-06 11:30:29.693704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.363 [2024-10-06 11:30:29.693722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.363 [2024-10-06 11:30:29.693729] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.363 [2024-10-06 11:30:29.693735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.364 [2024-10-06 11:30:29.693750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.364 qpair failed and we were unable to recover it. 
00:35:32.364 [2024-10-06 11:30:29.703665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.364 [2024-10-06 11:30:29.703731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.364 [2024-10-06 11:30:29.703746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.364 [2024-10-06 11:30:29.703753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.364 [2024-10-06 11:30:29.703759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.364 [2024-10-06 11:30:29.703774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.364 qpair failed and we were unable to recover it. 00:35:32.364 [2024-10-06 11:30:29.713747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.364 [2024-10-06 11:30:29.713830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.364 [2024-10-06 11:30:29.713845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.364 [2024-10-06 11:30:29.713851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.364 [2024-10-06 11:30:29.713857] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.364 [2024-10-06 11:30:29.713872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.364 qpair failed and we were unable to recover it. 00:35:32.364 [2024-10-06 11:30:29.723682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.364 [2024-10-06 11:30:29.723744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.364 [2024-10-06 11:30:29.723759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.364 [2024-10-06 11:30:29.723766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.364 [2024-10-06 11:30:29.723772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.364 [2024-10-06 11:30:29.723787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.364 qpair failed and we were unable to recover it. 
00:35:32.364 [2024-10-06 11:30:29.733808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.364 [2024-10-06 11:30:29.733873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.364 [2024-10-06 11:30:29.733887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.364 [2024-10-06 11:30:29.733893] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.364 [2024-10-06 11:30:29.733900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.364 [2024-10-06 11:30:29.733920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.364 qpair failed and we were unable to recover it. 00:35:32.364 [2024-10-06 11:30:29.743719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.364 [2024-10-06 11:30:29.743785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.364 [2024-10-06 11:30:29.743799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.364 [2024-10-06 11:30:29.743806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.364 [2024-10-06 11:30:29.743812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.364 [2024-10-06 11:30:29.743826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.364 qpair failed and we were unable to recover it. 00:35:32.364 [2024-10-06 11:30:29.753843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.364 [2024-10-06 11:30:29.753914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.364 [2024-10-06 11:30:29.753929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.364 [2024-10-06 11:30:29.753936] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.364 [2024-10-06 11:30:29.753942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.364 [2024-10-06 11:30:29.753957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.364 qpair failed and we were unable to recover it. 
00:35:32.364 [2024-10-06 11:30:29.763863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.364 [2024-10-06 11:30:29.763956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.364 [2024-10-06 11:30:29.763971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.364 [2024-10-06 11:30:29.763977] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.364 [2024-10-06 11:30:29.763983] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.364 [2024-10-06 11:30:29.763998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.364 qpair failed and we were unable to recover it. 00:35:32.364 [2024-10-06 11:30:29.773873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.364 [2024-10-06 11:30:29.773940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.364 [2024-10-06 11:30:29.773954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.364 [2024-10-06 11:30:29.773961] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.364 [2024-10-06 11:30:29.773967] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.364 [2024-10-06 11:30:29.773981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.364 qpair failed and we were unable to recover it. 00:35:32.364 [2024-10-06 11:30:29.783907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.364 [2024-10-06 11:30:29.783967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.364 [2024-10-06 11:30:29.783985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.364 [2024-10-06 11:30:29.783992] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.364 [2024-10-06 11:30:29.783997] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.364 [2024-10-06 11:30:29.784012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.364 qpair failed and we were unable to recover it. 
00:35:32.364 [2024-10-06 11:30:29.793940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:32.364 [2024-10-06 11:30:29.794008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:32.364 [2024-10-06 11:30:29.794022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:32.364 [2024-10-06 11:30:29.794028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:32.364 [2024-10-06 11:30:29.794034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f81cc000b90 00:35:32.364 [2024-10-06 11:30:29.794049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:32.364 qpair failed and we were unable to recover it. 00:35:32.364 [2024-10-06 11:30:29.794073] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:35:32.364 A controller has encountered a failure and is being reset. 00:35:32.364 Controller properly reset. 00:35:33.743 Initializing NVMe Controllers 00:35:33.743 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:33.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:33.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:35:33.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:35:33.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:35:33.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:35:33.743 Initialization complete. Launching workers. 
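
After the CONNECT retries the host also fails to submit a Keep Alive, declares the controller failed, resets it, re-attaches to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 over fabrics and re-associates the TCP controller with lcores 0 through 3 before the worker threads restart. One way to provoke the same class of "Unknown controller ID" rejection against a standalone SPDK target is to delete and re-create the subsystem while a host stays connected; this is a hypothetical sketch, not the sequence the test script itself uses: the rpc.py path is the in-tree default, Malloc0 stands in for whatever bdev backs the namespace, and the serial number is the NVMF_SERIAL default from test/nvmf/common.sh:

  rpc=./scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # Drop the subsystem out from under the connected host...
  $rpc nvmf_delete_subsystem $nqn
  # ...then bring it back so the host's controller reset can re-attach.
  $rpc nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns $nqn Malloc0
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420

A connected host would see the deletion roughly as the run of failed CONNECTs above and the re-creation as the reset and re-attach that follow.
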
00:35:33.743 Starting thread on core 1 00:35:33.743 Starting thread on core 2 00:35:33.743 Starting thread on core 3 00:35:33.743 Starting thread on core 0 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:35:33.743 00:35:33.743 real 0m10.576s 00:35:33.743 user 0m22.188s 00:35:33.743 sys 0m4.503s 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:33.743 ************************************ 00:35:33.743 END TEST nvmf_target_disconnect_tc2 00:35:33.743 ************************************ 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:33.743 rmmod nvme_tcp 00:35:33.743 rmmod nvme_fabrics 00:35:33.743 rmmod nvme_keyring 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 2272804 ']' 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 2272804 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2272804 ']' 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2272804 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2272804 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:35:33.743 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2272804' 00:35:33.743 killing process with pid 2272804 00:35:33.744 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 2272804 00:35:33.744 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2272804 00:35:34.003 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:34.003 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:34.003 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:34.003 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:35:34.003 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:35:34.003 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:34.003 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:35:34.003 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:34.003 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:34.003 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:34.003 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:34.003 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:35.909 11:30:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:35.910 00:35:35.910 real 0m18.435s 00:35:35.910 user 0m48.823s 00:35:35.910 sys 0m8.805s 00:35:35.910 11:30:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:35.910 11:30:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:35.910 ************************************ 00:35:35.910 END TEST nvmf_target_disconnect 00:35:35.910 ************************************ 00:35:36.170 11:30:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:36.170 00:35:36.170 real 7m9.822s 00:35:36.170 user 16m45.784s 00:35:36.170 sys 2m0.098s 00:35:36.170 11:30:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:36.170 11:30:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.170 ************************************ 00:35:36.170 END TEST nvmf_host 00:35:36.170 ************************************ 00:35:36.170 11:30:33 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:35:36.170 11:30:33 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:35:36.170 11:30:33 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:36.170 11:30:33 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:36.170 11:30:33 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:36.170 11:30:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:36.170 ************************************ 00:35:36.170 START TEST nvmf_target_core_interrupt_mode 00:35:36.170 ************************************ 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:36.170 * Looking for test storage... 00:35:36.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:36.170 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:36.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.431 --rc genhtml_branch_coverage=1 00:35:36.431 --rc genhtml_function_coverage=1 00:35:36.431 --rc genhtml_legend=1 00:35:36.431 --rc geninfo_all_blocks=1 00:35:36.431 --rc geninfo_unexecuted_blocks=1 00:35:36.431 00:35:36.431 ' 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:36.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.431 --rc genhtml_branch_coverage=1 00:35:36.431 --rc genhtml_function_coverage=1 00:35:36.431 --rc genhtml_legend=1 00:35:36.431 --rc geninfo_all_blocks=1 00:35:36.431 --rc geninfo_unexecuted_blocks=1 00:35:36.431 00:35:36.431 ' 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:36.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.431 --rc genhtml_branch_coverage=1 00:35:36.431 --rc genhtml_function_coverage=1 00:35:36.431 --rc genhtml_legend=1 00:35:36.431 --rc geninfo_all_blocks=1 00:35:36.431 --rc geninfo_unexecuted_blocks=1 00:35:36.431 00:35:36.431 ' 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:36.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.431 --rc genhtml_branch_coverage=1 00:35:36.431 --rc genhtml_function_coverage=1 00:35:36.431 --rc genhtml_legend=1 00:35:36.431 --rc geninfo_all_blocks=1 00:35:36.431 --rc geninfo_unexecuted_blocks=1 00:35:36.431 00:35:36.431 ' 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:36.431 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:36.432 ************************************ 00:35:36.432 START TEST nvmf_abort 00:35:36.432 ************************************ 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:36.432 * Looking for test storage... 00:35:36.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:36.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.432 --rc genhtml_branch_coverage=1 00:35:36.432 --rc genhtml_function_coverage=1 00:35:36.432 --rc genhtml_legend=1 00:35:36.432 --rc geninfo_all_blocks=1 00:35:36.432 --rc geninfo_unexecuted_blocks=1 00:35:36.432 00:35:36.432 ' 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:36.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.432 --rc genhtml_branch_coverage=1 00:35:36.432 --rc genhtml_function_coverage=1 00:35:36.432 --rc genhtml_legend=1 00:35:36.432 --rc geninfo_all_blocks=1 00:35:36.432 --rc geninfo_unexecuted_blocks=1 00:35:36.432 00:35:36.432 ' 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:36.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.432 --rc genhtml_branch_coverage=1 00:35:36.432 --rc genhtml_function_coverage=1 00:35:36.432 --rc genhtml_legend=1 00:35:36.432 --rc geninfo_all_blocks=1 00:35:36.432 --rc geninfo_unexecuted_blocks=1 00:35:36.432 00:35:36.432 ' 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:36.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.432 --rc genhtml_branch_coverage=1 00:35:36.432 --rc genhtml_function_coverage=1 00:35:36.432 --rc genhtml_legend=1 00:35:36.432 --rc geninfo_all_blocks=1 00:35:36.432 --rc geninfo_unexecuted_blocks=1 00:35:36.432 00:35:36.432 ' 00:35:36.432 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:36.432 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:36.693 11:30:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:35:36.693 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:41.969 11:30:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:41.969 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:35:41.969 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:41.970 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:41.970 Found net devices under 0000:af:00.0: cvl_0_0 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:41.970 Found net devices under 0000:af:00.1: cvl_0_1 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:41.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:41.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:35:41.970 00:35:41.970 --- 10.0.0.2 ping statistics --- 00:35:41.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.970 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:41.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:41.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:35:41.970 00:35:41.970 --- 10.0.0.1 ping statistics --- 00:35:41.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.970 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=2277261 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 2277261 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2277261 ']' 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:41.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:41.970 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:42.229 [2024-10-06 11:30:39.553359] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:42.229 [2024-10-06 11:30:39.554285] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:35:42.229 [2024-10-06 11:30:39.554330] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:42.230 [2024-10-06 11:30:39.612863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:42.230 [2024-10-06 11:30:39.651181] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:42.230 [2024-10-06 11:30:39.651222] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:42.230 [2024-10-06 11:30:39.651229] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:42.230 [2024-10-06 11:30:39.651235] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:42.230 [2024-10-06 11:30:39.651240] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:42.230 [2024-10-06 11:30:39.652134] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:35:42.230 [2024-10-06 11:30:39.652157] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:35:42.230 [2024-10-06 11:30:39.652159] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:35:42.230 [2024-10-06 11:30:39.720546] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:42.230 [2024-10-06 11:30:39.720692] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:42.230 [2024-10-06 11:30:39.720832] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:35:42.230 [2024-10-06 11:30:39.720981] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:42.230 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:42.230 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:35:42.230 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:42.230 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:42.230 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:42.230 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:42.230 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:35:42.230 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.230 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:42.230 [2024-10-06 11:30:39.784950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:42.230 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.230 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:35:42.230 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.230 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:42.489 Malloc0 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:42.489 Delay0 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
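
For readers following the trace, the network plumbing logged above (nvmf/common.sh@265-@291 and the nvmf_tgt launch at @506) amounts to a loopback topology built from the two cvl_0_* ports, with the target running in interrupt mode inside a network namespace. A minimal by-hand sketch of the same steps, assuming the interface names this rig happens to expose and a relative path into the SPDK checkout (substitute your own NIC names and paths):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # reachability check, as in the trace above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &

This is only a sketch of what the helpers do, not a drop-in replacement for nvmftestinit; the flags are copied verbatim from the trace.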
00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:42.489 [2024-10-06 11:30:39.840910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.489 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:35:42.489 [2024-10-06 11:30:39.946859] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:45.026 Initializing NVMe Controllers 00:35:45.026 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:35:45.026 controller IO queue size 128 less than required 00:35:45.026 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:35:45.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:35:45.026 Initialization complete. Launching workers. 
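
The rpc_cmd calls traced in abort.sh@17-@30 configure the target and then drive it with the bundled abort example. rpc_cmd effectively forwards to the standard SPDK RPC client against the target's /var/tmp/spdk.sock, so roughly the same sequence can be issued by hand; this is a hedged sketch, with the bdev names, NQN, address and queue depth taken from the trace above (scripts/rpc.py is an assumption about the wrapper, not shown in this log):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # then hammer the subsystem with aborts for one second at queue depth 128:
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The counters that follow summarize how many queued I/Os the abort requests caught (success) versus how many completed before the abort landed or could not be submitted.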
00:35:45.026 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37998 00:35:45.026 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38059, failed to submit 66 00:35:45.026 success 37998, unsuccessful 61, failed 0 00:35:45.026 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:45.026 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.026 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.026 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.026 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:35:45.026 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:35:45.026 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:45.026 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:35:45.026 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:45.026 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:35:45.026 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:45.026 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:45.026 rmmod nvme_tcp 00:35:45.026 rmmod nvme_fabrics 00:35:45.026 rmmod nvme_keyring 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 2277261 ']' 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 2277261 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2277261 ']' 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2277261 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2277261 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2277261' 00:35:45.026 killing process with pid 2277261 
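
The kill/wait and cleanup that follow mirror nvmftestfini from nvmf/common.sh: unload the host-side NVMe modules, stop the target, drop the firewall rule and the namespace. Condensed into a sketch of what the trace shows (the assumption here is that the remove_spdk_ns helper is what deletes cvl_0_0_ns_spdk; its body is not expanded in this log):

  modprobe -r nvme-tcp nvme-fabrics                       # rmmod nvme_tcp / nvme_fabrics / nvme_keyring in the trace
  kill $nvmfpid && wait $nvmfpid                          # nvmfpid=2277261 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # strip the SPDK_NVMF-tagged ACCEPT rule
  ip -4 addr flush cvl_0_1
  # remove_spdk_ns then tears down the cvl_0_0_ns_spdk namespace as the last step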
00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2277261 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2277261 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:45.026 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:46.954 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:46.954 00:35:46.954 real 0m10.566s 00:35:46.954 user 0m9.932s 00:35:46.954 sys 0m5.336s 00:35:46.954 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:46.954 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:46.954 ************************************ 00:35:46.954 END TEST nvmf_abort 00:35:46.954 ************************************ 00:35:46.954 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:35:46.954 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:46.954 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:46.954 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:46.954 ************************************ 00:35:46.954 START TEST nvmf_ns_hotplug_stress 00:35:46.954 ************************************ 00:35:46.954 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:35:47.215 * Looking for test storage... 
00:35:47.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:47.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.215 --rc genhtml_branch_coverage=1 00:35:47.215 --rc genhtml_function_coverage=1 00:35:47.215 --rc genhtml_legend=1 00:35:47.215 --rc geninfo_all_blocks=1 00:35:47.215 --rc geninfo_unexecuted_blocks=1 00:35:47.215 00:35:47.215 ' 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:47.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.215 --rc genhtml_branch_coverage=1 00:35:47.215 --rc genhtml_function_coverage=1 00:35:47.215 --rc genhtml_legend=1 00:35:47.215 --rc geninfo_all_blocks=1 00:35:47.215 --rc geninfo_unexecuted_blocks=1 00:35:47.215 00:35:47.215 ' 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:47.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.215 --rc genhtml_branch_coverage=1 00:35:47.215 --rc genhtml_function_coverage=1 00:35:47.215 --rc genhtml_legend=1 00:35:47.215 --rc geninfo_all_blocks=1 00:35:47.215 --rc geninfo_unexecuted_blocks=1 00:35:47.215 00:35:47.215 ' 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:47.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.215 --rc genhtml_branch_coverage=1 00:35:47.215 --rc genhtml_function_coverage=1 
00:35:47.215 --rc genhtml_legend=1 00:35:47.215 --rc geninfo_all_blocks=1 00:35:47.215 --rc geninfo_unexecuted_blocks=1 00:35:47.215 00:35:47.215 ' 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
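For readability, the nvmf/common.sh defaults being established in the trace above amount to roughly the following bash sketch (values copied from the trace; the hostnqn/hostid come from `nvme gen-hostnqn` on this particular host and the hostid derivation shown here is an assumption, not the literal common.sh code):

    # NVMe-oF test defaults as traced from test/nvmf/common.sh
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8
    NVMF_TCP_IP_ADDRESS=127.0.0.1
    NVMF_TRANSPORT_OPTS=
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # here: nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # uuid portion (assumed derivation); here 80b56b8f-cbc7-e911-906e-0017a4403562
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'
    NET_TYPE=phy
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn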
00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.215 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:35:47.216 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:52.492 11:30:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:52.492 11:30:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:52.492 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:52.492 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:52.492 
11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:52.492 Found net devices under 0000:af:00.0: cvl_0_0 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:52.492 Found net devices under 0000:af:00.1: cvl_0_1 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:52.492 11:30:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:52.492 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:52.492 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:52.752 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:52.752 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:52.752 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:52.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:52.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:35:52.752 00:35:52.752 --- 10.0.0.2 ping statistics --- 00:35:52.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:52.752 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:35:52.752 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:52.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:52.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:35:52.752 00:35:52.752 --- 10.0.0.1 ping statistics --- 00:35:52.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:52.752 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:35:52.752 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:52.752 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:35:52.752 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:52.752 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:52.753 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:52.753 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:52.753 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:52.753 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:52.753 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:52.753 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:35:52.753 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:52.753 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:52.753 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:52.753 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=2281172 00:35:52.753 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:35:52.753 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 2281172 00:35:52.753 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2281172 ']' 00:35:52.753 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:52.753 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:52.753 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:52.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
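Stripped of the xtrace noise, the interface and namespace preparation traced above (done before nvmf_tgt is launched) reduces to roughly this sequence, assuming the two ice ports were enumerated as cvl_0_0 and cvl_0_1 as in this run:

    # Move the target-side port into its own network namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic in on port 4420 and verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1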
00:35:52.753 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:52.753 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:52.753 [2024-10-06 11:30:50.182144] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:52.753 [2024-10-06 11:30:50.183043] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:35:52.753 [2024-10-06 11:30:50.183101] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:52.753 [2024-10-06 11:30:50.245005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:52.753 [2024-10-06 11:30:50.283520] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:52.753 [2024-10-06 11:30:50.283559] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:52.753 [2024-10-06 11:30:50.283566] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:52.753 [2024-10-06 11:30:50.283572] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:52.753 [2024-10-06 11:30:50.283577] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:52.753 [2024-10-06 11:30:50.284520] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:35:52.753 [2024-10-06 11:30:50.284541] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:35:52.753 [2024-10-06 11:30:50.284547] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:35:53.013 [2024-10-06 11:30:50.352859] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:53.013 [2024-10-06 11:30:50.352990] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:53.013 [2024-10-06 11:30:50.353227] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:53.013 [2024-10-06 11:30:50.353432] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
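The target launch that produced the startup notices above is essentially the following (a sketch: nvmfappstart expands NVMF_APP with the -i/-e/--interrupt-mode arguments assembled earlier in the trace, and waitforlisten is the autotest_common.sh helper that polls the RPC socket until the app answers):

    # Start nvmf_tgt inside the target namespace; -m 0xE pins reactors to cores 1-3
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # waits for /var/tmp/spdk.sock to accept RPCs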
00:35:53.013 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:53.013 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:35:53.013 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:53.013 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:53.013 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:53.013 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:53.013 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:35:53.013 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:53.013 [2024-10-06 11:30:50.585390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:53.272 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:53.272 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:53.531 [2024-10-06 11:30:50.969627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:53.531 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:53.790 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:35:53.790 Malloc0 00:35:54.049 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:54.049 Delay0 00:35:54.049 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:54.309 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:35:54.567 NULL1 00:35:54.568 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
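Before the stress loop starts, the rpc.py calls traced above configure the target as follows (a sketch with paths abbreviated to the SPDK repo root; arguments mirror the traced invocations):

    rpc_py=./scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc_py bdev_malloc_create 32 512 -b Malloc0
    $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc_py bdev_null_create NULL1 1000 512
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The loop that follows in the trace then runs spdk_nvme_perf (randread, queue depth 128, 512-byte I/O, 30 s) against 10.0.0.2:4420 while repeatedly removing and re-adding namespace 1 on cnode1 and bumping NULL1's size by one per iteration (1001, 1002, ...), as long as the perf process is still alive (kill -0 $PERF_PID).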
00:35:54.568 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2281430 00:35:54.568 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:35:54.568 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:35:54.568 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:55.945 Read completed with error (sct=0, sc=11) 00:35:55.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:55.945 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:55.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:55.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:55.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:55.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:55.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:56.204 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:35:56.204 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:35:56.204 true 00:35:56.462 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:35:56.462 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:57.029 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:57.288 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:35:57.288 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:35:57.547 true 00:35:57.547 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:35:57.547 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:57.806 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:57.806 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:35:57.806 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:35:58.066 true 00:35:58.066 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:35:58.066 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:59.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:59.261 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:59.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:59.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:59.261 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:35:59.261 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:35:59.521 true 00:35:59.521 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:35:59.521 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:59.781 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:00.040 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:00.040 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:00.040 true 00:36:00.040 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:00.040 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:01.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:01.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:01.424 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:36:01.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:01.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:01.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:01.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:01.682 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:01.682 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:01.682 true 00:36:01.682 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:01.682 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:02.619 11:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:02.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:02.878 11:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:02.878 11:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:02.878 true 00:36:02.878 11:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:02.878 11:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:03.138 11:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:03.397 11:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:03.397 11:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:03.656 true 00:36:03.656 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:03.656 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:04.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:04.592 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:04.592 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:04.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:04.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:04.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:04.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:04.851 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:04.851 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:05.110 true 00:36:05.110 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:05.110 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:06.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:06.047 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:06.047 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:06.047 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:06.306 true 00:36:06.306 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:06.306 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:06.565 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:06.565 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:06.565 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:06.825 true 00:36:06.825 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:06.825 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:08.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:08.203 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:36:08.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:08.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:08.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:08.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:08.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:08.203 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:08.203 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:08.462 true 00:36:08.462 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:08.462 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:09.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:09.398 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:09.398 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:09.398 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:09.656 true 00:36:09.656 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:09.656 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:09.915 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:10.174 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:10.174 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:10.174 true 00:36:10.174 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:10.174 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:11.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:11.551 11:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:11.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:11.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:11.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:11.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:11.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:11.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:11.551 11:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:11.551 11:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:11.811 true 00:36:11.811 11:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:11.811 11:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:12.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:12.748 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:12.748 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:12.748 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:13.006 true 00:36:13.006 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:13.006 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:13.266 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:13.266 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:13.266 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:13.525 true 00:36:13.525 11:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:13.525 11:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:14.904 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:36:14.904 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:14.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:14.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:14.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:14.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:14.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:14.904 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:14.904 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:15.164 true 00:36:15.164 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:15.164 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:16.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:16.101 11:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:16.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:16.101 11:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:16.101 11:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:16.361 true 00:36:16.361 11:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:16.361 11:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:16.620 11:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:16.879 11:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:16.879 11:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:16.879 true 00:36:16.879 11:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:16.879 11:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:18.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:18.257 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:18.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:18.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:18.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:18.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:18.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:18.257 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:18.257 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:18.516 true 00:36:18.516 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:18.516 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:19.451 11:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:19.451 11:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:19.451 11:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:19.710 true 00:36:19.710 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:19.710 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:19.969 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:20.229 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:20.229 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:20.229 true 00:36:20.229 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:20.229 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:21.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:21.606 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:21.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:21.606 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:21.606 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:21.606 true 00:36:21.865 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:21.865 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:21.865 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:22.124 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:22.124 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:22.383 true 00:36:22.383 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:22.383 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:23.317 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:23.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:23.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:23.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:23.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:23.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:23.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:23.576 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:23.576 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:23.835 true 00:36:23.835 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 2281430 00:36:23.835 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:24.770 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:24.770 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:24.770 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:25.029 Initializing NVMe Controllers 00:36:25.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:25.029 Controller IO queue size 128, less than required. 00:36:25.029 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:25.029 Controller IO queue size 128, less than required. 00:36:25.029 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:25.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:25.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:25.029 Initialization complete. Launching workers. 00:36:25.029 ======================================================== 00:36:25.029 Latency(us) 00:36:25.029 Device Information : IOPS MiB/s Average min max 00:36:25.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2096.65 1.02 42301.43 2487.56 1067873.95 00:36:25.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18289.36 8.93 6998.87 1442.85 361073.26 00:36:25.029 ======================================================== 00:36:25.029 Total : 20386.01 9.95 10629.65 1442.85 1067873.95 00:36:25.029 00:36:25.029 true 00:36:25.029 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2281430 00:36:25.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2281430) - No such process 00:36:25.029 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2281430 00:36:25.029 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:25.287 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:25.546 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:36:25.546 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:36:25.546 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:36:25.546 11:31:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:25.546 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:36:25.546 null0 00:36:25.546 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:25.546 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:25.546 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:36:25.804 null1 00:36:25.804 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:25.804 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:25.804 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:26.062 null2 00:36:26.062 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:26.062 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:26.062 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:26.062 null3 00:36:26.062 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:26.062 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:26.062 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:26.320 null4 00:36:26.320 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:26.320 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:26.320 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:36:26.580 null5 00:36:26.580 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:26.580 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:26.580 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:36:26.839 null6 00:36:26.840 11:31:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:36:26.840 null7 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2286594 2286596 2286599 2286602 2286606 2286608 2286609 2286612 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:26.840 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:27.099 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.099 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:27.099 
11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:27.099 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:27.099 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:27.099 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:27.099 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:27.099 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.358 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:27.359 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:27.359 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.359 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:27.618 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.618 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:27.618 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:27.618 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:27.618 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:27.618 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
3 00:36:27.618 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:27.618 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:27.618 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:27.618 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.618 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:27.618 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:27.618 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.618 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:27.877 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.135 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.135 
11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:28.394 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:28.394 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:28.394 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:28.394 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:28.394 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:28.394 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:28.394 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:28.394 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.652 11:31:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:28.652 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.910 11:31:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.910 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:28.911 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.911 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.911 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:29.169 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.169 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:29.169 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:29.169 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:29.169 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:29.169 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:29.169 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:29.169 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:29.429 11:31:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.429 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:29.688 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.688 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:29.688 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:29.688 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:29.688 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:29.688 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:29.688 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:29.688 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:29.947 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.947 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.947 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:29.947 11:31:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.947 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.947 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:29.947 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.947 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.947 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:29.947 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.947 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.947 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:29.947 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.947 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.948 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:29.948 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.948 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.948 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:29.948 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.948 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.948 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:29.948 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.948 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.948 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:29.948 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:29.948 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:29.948 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:29.948 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.948 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:29.948 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:29.948 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:29.948 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.207 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:30.466 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:30.466 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:30.466 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:30.466 11:31:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:30.466 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:30.466 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.466 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:30.466 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:30.726 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:30.983 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:30.983 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.983 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:30.983 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:36:30.983 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:30.983 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:30.984 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:30.984 rmmod nvme_tcp 00:36:31.242 
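[editor's note] The xtrace above is the namespace hotplug stress loop itself: ns_hotplug_stress.sh keeps re-attaching the null0..null7 bdevs as namespaces 1..8 of nqn.2016-06.io.spdk:cnode1 and tearing them down again, up to ten passes per worker, while I/O runs against the subsystem. A minimal sketch of that loop, assuming only what the trace shows (the rpc.py path, the NQN, the i < 10 bound, the nsid-to-nullX mapping); the shuffled ordering per pass is an assumption:

  #!/usr/bin/env bash
  # Hedged sketch of the add/remove stress traced above; ordering via shuf is assumed.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  i=0
  while (( i < 10 )); do
      # attach bdevs null0..null7 as namespaces 1..8, in a random order per pass
      for n in $(shuf -i 1-8); do
          "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
      done
      # detach the same namespaces again before the next pass
      for n in $(shuf -i 1-8); do
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
      done
      (( ++i ))
  done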
rmmod nvme_fabrics 00:36:31.242 rmmod nvme_keyring 00:36:31.242 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:31.242 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:36:31.242 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:36:31.242 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 2281172 ']' 00:36:31.242 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 2281172 00:36:31.242 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2281172 ']' 00:36:31.242 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2281172 00:36:31.242 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:36:31.243 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:31.243 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2281172 00:36:31.243 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:31.243 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:31.243 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2281172' 00:36:31.243 killing process with pid 2281172 00:36:31.243 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2281172 00:36:31.243 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2281172 00:36:31.503 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:31.503 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:31.503 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:31.503 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:36:31.503 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:36:31.503 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:31.503 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:36:31.503 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:31.503 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:31.503 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
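[editor's note] With the loop done, the trace moves into nvmftestfini: the initiator-side kernel modules are unloaded (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above) and the target process whose pid was recorded earlier is killed and reaped. A condensed sketch of that teardown, under the assumption that the retry bound and the sudo guard behave as the @125/@960 trace lines suggest:

  # Hedged sketch of the nvmfcleanup / killprocess teardown traced above.
  nvmfcleanup() {
      sync
      set +e
      for i in {1..20}; do
          modprobe -v -r nvme-tcp && break    # also pulls out nvme_fabrics / nvme_keyring
          sleep 1
      done
      modprobe -v -r nvme-fabrics
      set -e
  }

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0                          # already gone
      [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 0    # never kill the sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }

  nvmfcleanup
  killprocess 2281172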
00:36:31.503 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:31.503 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.412 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:33.412 00:36:33.412 real 0m46.474s 00:36:33.412 user 2m54.450s 00:36:33.412 sys 0m20.188s 00:36:33.412 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:33.412 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:33.412 ************************************ 00:36:33.412 END TEST nvmf_ns_hotplug_stress 00:36:33.412 ************************************ 00:36:33.412 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:36:33.412 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:33.412 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:33.412 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:33.671 ************************************ 00:36:33.671 START TEST nvmf_delete_subsystem 00:36:33.671 ************************************ 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:36:33.671 * Looking for test storage... 
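[editor's note] The real/user/sys summary and the START/END banners above come from the run_test wrapper in autotest_common.sh, which times each test script and frames its output; here it closes nvmf_ns_hotplug_stress and immediately launches delete_subsystem.sh with the same --transport=tcp --interrupt-mode arguments. A rough sketch of that wrapper, with the xtrace bookkeeping left out:

  # Hedged sketch of the run_test wrapper that produces the banners above;
  # the real wrapper also toggles xtrace, which is omitted here.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return "$rc"
  }

  run_test nvmf_delete_subsystem \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh \
      --transport=tcp --interrupt-mode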
00:36:33.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:33.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.671 --rc genhtml_branch_coverage=1 00:36:33.671 --rc genhtml_function_coverage=1 00:36:33.671 --rc genhtml_legend=1 00:36:33.671 --rc geninfo_all_blocks=1 00:36:33.671 --rc geninfo_unexecuted_blocks=1 00:36:33.671 00:36:33.671 ' 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:33.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.671 --rc genhtml_branch_coverage=1 00:36:33.671 --rc genhtml_function_coverage=1 00:36:33.671 --rc genhtml_legend=1 00:36:33.671 --rc geninfo_all_blocks=1 00:36:33.671 --rc geninfo_unexecuted_blocks=1 00:36:33.671 00:36:33.671 ' 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:33.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.671 --rc genhtml_branch_coverage=1 00:36:33.671 --rc genhtml_function_coverage=1 00:36:33.671 --rc genhtml_legend=1 00:36:33.671 --rc geninfo_all_blocks=1 00:36:33.671 --rc geninfo_unexecuted_blocks=1 00:36:33.671 00:36:33.671 ' 00:36:33.671 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:33.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.671 --rc genhtml_branch_coverage=1 00:36:33.671 --rc genhtml_function_coverage=1 00:36:33.671 --rc 
genhtml_legend=1 00:36:33.671 --rc geninfo_all_blocks=1 00:36:33.671 --rc geninfo_unexecuted_blocks=1 00:36:33.672 00:36:33.672 ' 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:33.672 11:31:31 
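[editor's note] Sourcing nvmf/common.sh above fixes the initiator identity for the rest of the run: nvme gen-hostnqn produces a uuid-based host NQN, and the host ID is simply that NQN's uuid suffix. A small sketch of the derivation; the uuidgen fallback for machines without nvme-cli is an assumption, while the prefix and the --hostnqn/--hostid pair match the trace:

  # Hedged sketch of the NVME_HOSTNQN / NVME_HOSTID setup traced above.
  if command -v nvme >/dev/null 2>&1; then
      NVME_HOSTNQN=$(nvme gen-hostnqn)
  else
      NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"   # assumed fallback
  fi
  NVME_HOSTID=${NVME_HOSTNQN##*:}                 # keep only the uuid after the last ':'
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  echo "initiator identity: ${NVME_HOST[*]}"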
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:36:33.672 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:38.955 11:31:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:38.955 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:38.956 11:31:36 
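[editor's note] The run of @320-@356 lines above sorts the host's NICs into driver families (e810, x722, mlx) by vendor:device ID and then keeps the e810 list as the candidate ports. A condensed sketch of that classification; using lspci here instead of the harness's cached PCI scan is an assumption, and the Mellanox IDs are collapsed into a wildcard:

  # Hedged sketch of the NIC classification traced above (nvmf/common.sh).
  declare -a e810 x722 mlx pci_devs
  intel=0x8086 mellanox=0x15b3

  while read -r addr vendor device; do
      case "$vendor:$device" in
          "$intel:0x1592" | "$intel:0x159b") e810+=("$addr") ;;
          "$intel:0x37d2")                   x722+=("$addr") ;;
          "$mellanox":0x*)                   mlx+=("$addr") ;;   # all ConnectX IDs collapsed
      esac
  done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, "0x"$3, "0x"$4}')

  pci_devs=("${e810[@]}")    # this host shows two 0x159b (E810) ports in the log
  echo "candidate NVMe-oF NICs: ${pci_devs[*]}"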
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:38.956 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:38.956 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:38.956 11:31:36 
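[editor's note] Each candidate PCI function is then mapped to its kernel netdev by globbing /sys/bus/pci/devices/<bdf>/net, which is what produces the 'Found net devices under 0000:af:00.x' echoes around this point in the trace. A sketch of that lookup, assuming the two E810 addresses from the log and using operstate as a stand-in for the harness's link-up check:

  # Hedged sketch of the sysfs netdev discovery traced above.
  net_devs=()
  for pci in 0000:af:00.0 0000:af:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      [[ -e ${pci_net_devs[0]} ]] || continue          # port has no bound netdev
      pci_net_devs=("${pci_net_devs[@]##*/}")          # keep only the interface names
      for net_dev in "${pci_net_devs[@]}"; do
          [[ $(cat "/sys/class/net/$net_dev/operstate" 2>/dev/null) == up ]] || continue
          echo "Found net devices under $pci: $net_dev"
          net_devs+=("$net_dev")
      done
  done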
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:38.956 Found net devices under 0000:af:00.0: cvl_0_0 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:38.956 Found net devices under 0000:af:00.1: cvl_0_1 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:38.956 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:39.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:39.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:36:39.216 00:36:39.216 --- 10.0.0.2 ping statistics --- 00:36:39.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:39.216 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:39.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:39.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:36:39.216 00:36:39.216 --- 10.0.0.1 ping statistics --- 00:36:39.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:39.216 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=2290691 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 2290691 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2290691 ']' 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:39.216 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:39.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
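The trace above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace (pid 2290691) and then blocks in waitforlisten until the target answers on /var/tmp/spdk.sock. A minimal sketch of that wait loop is below; it is not the exact helper from autotest_common.sh, and probing with rpc_get_methods is simply one illustrative way to confirm the RPC socket is live.

  # Sketch only: wait for the freshly launched nvmf_tgt to answer RPCs on its UNIX socket.
  nvmfpid=2290691                    # pid printed above; normally captured with $! after backgrounding nvmf_tgt
  rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do    # max_retries=100, matching the traced helper
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && break
      sleep 0.1
  done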
00:36:39.217 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:39.217 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.217 [2024-10-06 11:31:36.766618] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:39.217 [2024-10-06 11:31:36.767644] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:36:39.217 [2024-10-06 11:31:36.767680] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:39.476 [2024-10-06 11:31:36.829103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:39.476 [2024-10-06 11:31:36.869507] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:39.476 [2024-10-06 11:31:36.869547] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:39.476 [2024-10-06 11:31:36.869555] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:39.476 [2024-10-06 11:31:36.869561] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:39.476 [2024-10-06 11:31:36.869566] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:39.476 [2024-10-06 11:31:36.874079] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:39.476 [2024-10-06 11:31:36.874083] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:39.476 [2024-10-06 11:31:36.935806] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:39.476 [2024-10-06 11:31:36.936091] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:39.476 [2024-10-06 11:31:36.936136] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
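The startup notices above point at the runtime tracing hooks available while debugging this test: the target was launched with -e 0xFFFF, so every tracepoint group is enabled, and the app itself suggests running 'spdk_trace -s nvmf -i 0' or copying /dev/shm/nvmf_trace.0 for offline analysis. A hedged sketch of both options follows; the -f flag for decoding a saved file is assumed from typical spdk_trace usage and may differ between SPDK versions.

  # Live snapshot of the trace history for shm instance 0 (the "-i 0" the target was started with):
  ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt
  # Or keep the shared-memory file for analysis after the target exits:
  cp /dev/shm/nvmf_trace.0 /tmp/
  ./build/bin/spdk_trace -f /tmp/nvmf_trace.0 > nvmf_trace_offline.txt   # assumed flag; check spdk_trace --help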
00:36:39.476 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:39.476 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:36:39.477 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:39.477 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:39.477 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.477 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:39.477 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:39.477 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.477 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.477 [2024-10-06 11:31:37.002573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:39.477 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.477 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:39.477 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.477 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.477 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.477 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:39.477 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.477 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.477 [2024-10-06 11:31:37.034761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:39.477 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.477 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:36:39.477 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.477 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.477 NULL1 00:36:39.477 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.477 11:31:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:39.477 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.477 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.736 Delay0 00:36:39.736 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.736 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.736 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.736 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.736 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.736 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2290893 00:36:39.736 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:36:39.736 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:36:39.736 [2024-10-06 11:31:37.107394] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
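Taken together, the rpc_cmd calls above (delete_subsystem.sh lines 15-26) build the whole target side before perf starts: a TCP transport, subsystem cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, a 1000 MiB null bdev, and a delay bdev with ~1 s latencies exposed as the namespace. A hedged standalone equivalent is sketched below; the rpc() wrapper is illustrative (the test's rpc_cmd talks to the same socket), and the perf invocation mirrors the one traced above.

  sock=/var/tmp/spdk.sock
  rpc() { scripts/rpc.py -s "$sock" "$@"; }    # illustrative wrapper, not the test's rpc_cmd

  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_null_create NULL1 1000 512                          # 1000 MiB backing bdev, 512 B blocks
  rpc bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000              # average/p99 read+write latency of ~1 s
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # Initiator side: 5 s of queued random I/O against the deliberately slow namespace.
  build/bin/spdk_nvme_perf -c 0xC -q 128 -o 512 -w randrw -M 70 -P 4 -t 5 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &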
00:36:41.765 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:41.766 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.766 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 [2024-10-06 11:31:39.268483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xdd0b50 is same with the state(6) to be set 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 starting I/O failed: -6 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 [2024-10-06 11:31:39.270492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f38bc000c00 is same with the state(6) to be set 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 
00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Write completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.766 Read completed with error (sct=0, sc=8) 00:36:41.767 Write completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Write completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Write completed with error (sct=0, sc=8) 00:36:41.767 Write completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 [2024-10-06 11:31:39.270857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f38bc00d640 is same with the state(6) to be set 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Write completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Write completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Write completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Write completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Write completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Write completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Write completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed 
with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Write completed with error (sct=0, sc=8) 00:36:41.767 Read completed with error (sct=0, sc=8) 00:36:41.767 Write completed with error (sct=0, sc=8) 00:36:41.767 [2024-10-06 11:31:39.271084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f38bc00cfe0 is same with the state(6) to be set 00:36:42.705 [2024-10-06 11:31:40.246736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd3a80 is same with the state(6) to be set 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 [2024-10-06 11:31:40.270522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f38bc00d310 is same with the state(6) to be set 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 [2024-10-06 11:31:40.273093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd0820 is same with the state(6) to be set 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 
Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 [2024-10-06 11:31:40.273224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd0320 is same with the state(6) to be set 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 Write completed with error (sct=0, sc=8) 00:36:42.705 Read completed with error (sct=0, sc=8) 00:36:42.705 [2024-10-06 11:31:40.273873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd0e80 is same with the state(6) to be set 00:36:42.705 Initializing NVMe Controllers 00:36:42.705 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:42.705 Controller IO queue size 128, less than required. 00:36:42.705 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:42.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:42.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:42.706 Initialization complete. Launching workers. 
00:36:42.706 ======================================================== 00:36:42.706 Latency(us) 00:36:42.706 Device Information : IOPS MiB/s Average min max 00:36:42.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.35 0.09 961007.46 740.43 1043341.89 00:36:42.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 147.54 0.07 908359.55 380.24 1011518.72 00:36:42.706 ======================================================== 00:36:42.706 Total : 323.90 0.16 937025.21 380.24 1043341.89 00:36:42.706 00:36:42.706 [2024-10-06 11:31:40.274530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd3a80 (9): Bad file descriptor 00:36:42.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:36:42.706 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.706 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:36:42.706 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2290893 00:36:42.706 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2290893 00:36:43.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2290893) - No such process 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2290893 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2290893 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2290893 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:43.273 [2024-10-06 11:31:40.798992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2291384 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2291384 00:36:43.273 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:43.531 [2024-10-06 11:31:40.853391] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
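At this point the subsystem, listener, and Delay0 namespace have been recreated (delete_subsystem.sh lines 48-50) and a second spdk_nvme_perf (pid 2291384, 3 s run) has connected. The repeated kill -0 / sleep 0.5 entries that follow are the script's bounded wait for that perf process to exit; a hedged sketch of the pattern used around both perf runs is below, with $perf_pid standing in for the backgrounded pid.

  perf_pid=$!                       # spdk_nvme_perf launched in the background (2290893 / 2291384 in this log)
  delay=0
  while kill -0 "$perf_pid"; do     # prints "kill: (pid) - No such process" once perf is gone, ending the loop
      if (( delay++ > 20 )); then   # the first run allows ~30 iterations, the second ~20 (0.5 s each)
          echo "spdk_nvme_perf did not exit in time" >&2
          exit 1
      fi
      sleep 0.5
  done
  wait "$perf_pid"                  # reap the exit status, as the script's later "wait" does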
00:36:43.789 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:43.789 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2291384 00:36:43.789 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:44.355 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:44.355 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2291384 00:36:44.355 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:44.921 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:44.921 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2291384 00:36:44.921 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:45.489 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:45.489 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2291384 00:36:45.489 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:46.057 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:46.057 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2291384 00:36:46.057 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:46.316 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:46.316 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2291384 00:36:46.316 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:46.576 Initializing NVMe Controllers 00:36:46.576 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:46.576 Controller IO queue size 128, less than required. 00:36:46.576 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:46.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:46.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:46.576 Initialization complete. Launching workers. 
00:36:46.576 ======================================================== 00:36:46.576 Latency(us) 00:36:46.576 Device Information : IOPS MiB/s Average min max 00:36:46.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004034.73 1000255.69 1042821.77 00:36:46.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004822.75 1000232.53 1011492.58 00:36:46.576 ======================================================== 00:36:46.576 Total : 256.00 0.12 1004428.74 1000232.53 1042821.77 00:36:46.576 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2291384 00:36:46.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2291384) - No such process 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2291384 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:46.836 rmmod nvme_tcp 00:36:46.836 rmmod nvme_fabrics 00:36:46.836 rmmod nvme_keyring 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 2290691 ']' 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 2290691 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2290691 ']' 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2290691 00:36:46.836 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2290691 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2290691' 00:36:47.096 killing process with pid 2290691 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2290691 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2290691 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:47.096 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:49.634 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:49.634 00:36:49.634 real 0m15.725s 00:36:49.634 user 0m25.943s 00:36:49.635 sys 0m5.793s 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:49.635 ************************************ 00:36:49.635 END TEST nvmf_delete_subsystem 00:36:49.635 ************************************ 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:49.635 ************************************ 00:36:49.635 START TEST nvmf_host_management 00:36:49.635 ************************************ 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:36:49.635 * Looking for test storage... 00:36:49.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:49.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:49.635 --rc genhtml_branch_coverage=1 00:36:49.635 --rc genhtml_function_coverage=1 00:36:49.635 --rc genhtml_legend=1 00:36:49.635 --rc geninfo_all_blocks=1 00:36:49.635 --rc geninfo_unexecuted_blocks=1 00:36:49.635 00:36:49.635 ' 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:49.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:49.635 --rc genhtml_branch_coverage=1 00:36:49.635 --rc genhtml_function_coverage=1 00:36:49.635 --rc genhtml_legend=1 00:36:49.635 --rc geninfo_all_blocks=1 00:36:49.635 --rc geninfo_unexecuted_blocks=1 00:36:49.635 00:36:49.635 ' 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:49.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:49.635 --rc genhtml_branch_coverage=1 00:36:49.635 --rc genhtml_function_coverage=1 00:36:49.635 --rc genhtml_legend=1 00:36:49.635 --rc geninfo_all_blocks=1 00:36:49.635 --rc geninfo_unexecuted_blocks=1 00:36:49.635 00:36:49.635 ' 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:49.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:49.635 --rc genhtml_branch_coverage=1 00:36:49.635 --rc genhtml_function_coverage=1 00:36:49.635 --rc genhtml_legend=1 
00:36:49.635 --rc geninfo_all_blocks=1 00:36:49.635 --rc geninfo_unexecuted_blocks=1 00:36:49.635 00:36:49.635 ' 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:49.635 11:31:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:36:49.635 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:54.910 11:31:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:54.910 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:54.910 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
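The enumeration traced here matches candidate NICs purely by PCI vendor/device ID (Intel E810, 0x8086:0x159b, in this run) and then, in the entries that follow, resolves each matched port to its kernel net interface through sysfs. A standalone sketch of the same idea, assuming pciutils' lspci is available; only the device ID is taken from the trace, the loop itself is illustrative and not the common.sh implementation:

# Illustrative sketch only: list the net interfaces backed by Intel E810 ports,
# the same 0x8086:0x159b ID matched in the trace above.
for pci in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue
        echo "Found net device ${dev##*/} under $pci"
    done
done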
00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:54.910 Found net devices under 0000:af:00.0: cvl_0_0 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:54.910 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:54.911 Found net devices under 0000:af:00.1: cvl_0_1 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:54.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:54.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:36:54.911 00:36:54.911 --- 10.0.0.2 ping statistics --- 00:36:54.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:54.911 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:54.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:54.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:36:54.911 00:36:54.911 --- 10.0.0.1 ping statistics --- 00:36:54.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:54.911 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=2295315 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 2295315 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2295315 ']' 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:54.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:54.911 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:54.911 [2024-10-06 11:31:52.352263] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:54.911 [2024-10-06 11:31:52.353149] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:36:54.911 [2024-10-06 11:31:52.353185] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:54.911 [2024-10-06 11:31:52.414268] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:54.911 [2024-10-06 11:31:52.453500] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:54.911 [2024-10-06 11:31:52.453543] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:54.911 [2024-10-06 11:31:52.453550] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:54.911 [2024-10-06 11:31:52.453556] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:54.911 [2024-10-06 11:31:52.453561] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:54.911 [2024-10-06 11:31:52.455098] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:36:54.911 [2024-10-06 11:31:52.455118] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:36:54.911 [2024-10-06 11:31:52.455219] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:36:54.911 [2024-10-06 11:31:52.455220] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:55.171 [2024-10-06 11:31:52.529725] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:55.171 [2024-10-06 11:31:52.529904] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:55.171 [2024-10-06 11:31:52.530399] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:55.171 [2024-10-06 11:31:52.530428] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:55.171 [2024-10-06 11:31:52.530724] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
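At this point the trace has moved one E810 port into the cvl_0_0_ns_spdk namespace, addressed both ends (10.0.0.2 on the target side, 10.0.0.1 on the initiator side), opened TCP port 4420 in iptables, verified reachability with ping, and started nvmf_tgt inside the namespace in interrupt mode. A condensed sketch of that bring-up, with the namespace, binary path, flags and socket path taken from the trace; the polling loop is an illustrative stand-in for the waitforlisten helper, not its actual code:

# Hedged sketch: start the target in the already-prepared namespace and wait
# for its RPC socket before sending any rpc_cmd calls.
NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!
# Poll the default RPC socket until the app answers; bail out if it died early.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt ($nvmfpid) is ready on /var/tmp/spdk.sock"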
00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.171 [2024-10-06 11:31:52.591948] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.171 Malloc0 00:36:55.171 [2024-10-06 11:31:52.659971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:36:55.171 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:55.172 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.172 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2295552 00:36:55.172 11:31:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2295552 /var/tmp/bdevperf.sock 00:36:55.172 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2295552 ']' 00:36:55.172 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:55.172 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:36:55.172 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:36:55.172 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:55.172 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:36:55.172 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:55.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:55.172 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:36:55.172 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:55.172 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:55.172 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.172 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:55.172 { 00:36:55.172 "params": { 00:36:55.172 "name": "Nvme$subsystem", 00:36:55.172 "trtype": "$TEST_TRANSPORT", 00:36:55.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:55.172 "adrfam": "ipv4", 00:36:55.172 "trsvcid": "$NVMF_PORT", 00:36:55.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:55.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:55.172 "hdgst": ${hdgst:-false}, 00:36:55.172 "ddgst": ${ddgst:-false} 00:36:55.172 }, 00:36:55.172 "method": "bdev_nvme_attach_controller" 00:36:55.172 } 00:36:55.172 EOF 00:36:55.172 )") 00:36:55.172 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:36:55.172 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
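The bdevperf job above gets its bdev configuration over /dev/fd/63: gen_nvmf_target_json expands the heredoc template once for subsystem 0 and pipes the result through jq (the filled-in attach-controller block appears in the next entry). Written out as a complete config file it looks roughly as below; the params block is the one printed by the trace, while the outer "subsystems"/"bdev"/"config" wrapper is the usual SPDK JSON-config shape and is assumed rather than shown, as is the /tmp path:

# Hedged reconstruction of the bdevperf invocation, with the generated config
# written to a file instead of the /dev/fd/63 process substitution used above.
cat > /tmp/bdevperf_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json \
    -q 64 -o 65536 -w verify -t 10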
00:36:55.172 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:36:55.172 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:55.172 "params": { 00:36:55.172 "name": "Nvme0", 00:36:55.172 "trtype": "tcp", 00:36:55.172 "traddr": "10.0.0.2", 00:36:55.172 "adrfam": "ipv4", 00:36:55.172 "trsvcid": "4420", 00:36:55.172 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:55.172 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:55.172 "hdgst": false, 00:36:55.172 "ddgst": false 00:36:55.172 }, 00:36:55.172 "method": "bdev_nvme_attach_controller" 00:36:55.172 }' 00:36:55.432 [2024-10-06 11:31:52.753241] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:36:55.432 [2024-10-06 11:31:52.753288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2295552 ] 00:36:55.432 [2024-10-06 11:31:52.810036] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:55.432 [2024-10-06 11:31:52.848664] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:55.432 Running I/O for 10 seconds... 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:36:55.692 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:36:55.955 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:36:55.955 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:36:55.955 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:36:55.955 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:36:55.955 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.955 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.955 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.955 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=591 00:36:55.955 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 591 -ge 100 ']' 00:36:55.955 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:36:55.955 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:36:55.955 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:36:55.955 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:36:55.956 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.956 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.956 [2024-10-06 11:31:53.391697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3c2b0 is same with the state(6) to be set 00:36:55.956 [2024-10-06 11:31:53.391739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3c2b0 is same with the state(6) to be set 00:36:55.956 [2024-10-06 11:31:53.391747] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3c2b0 is same with the state(6) to be set 00:36:55.956 [2024-10-06 11:31:53.391754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3c2b0 is same with the state(6) to be set 00:36:55.956 [2024-10-06 11:31:53.391760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3c2b0 is same with the state(6) to be set 00:36:55.956 [2024-10-06 11:31:53.391766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3c2b0 is same with the state(6) to be set 00:36:55.956 [2024-10-06 11:31:53.391772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3c2b0 is same with the state(6) to be set 00:36:55.956 [2024-10-06 11:31:53.391778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3c2b0 is same with the state(6) to be set 00:36:55.956 [2024-10-06 11:31:53.391784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3c2b0 is same with the state(6) to be set 00:36:55.956 [2024-10-06 11:31:53.391790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3c2b0 is same with the state(6) to be set 00:36:55.956 [2024-10-06 11:31:53.391796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3c2b0 is same with the state(6) to be set 00:36:55.956 [2024-10-06 11:31:53.391802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3c2b0 is same with the state(6) to be set 00:36:55.956 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.956 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:36:55.956 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.956 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.956 [2024-10-06 11:31:53.402681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:55.956 [2024-10-06 11:31:53.402715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.402725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:55.956 [2024-10-06 11:31:53.402732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.402740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:55.956 [2024-10-06 11:31:53.402746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.402753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:55.956 [2024-10-06 11:31:53.402760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.402767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1690fc0 is same with the state(6) to be set 00:36:55.956 [2024-10-06 11:31:53.403705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.403730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.403743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.403752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.403760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.403767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.403775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.403782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.403790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.403796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.403804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.403811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.403820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.403827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.403835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.403841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.403849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.403855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.403864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.403870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.403879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.403885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.403893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.403899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.403908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.403915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.403925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.403932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.403939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.403946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.403954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.403960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.403970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.403977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.403985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.403992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.403999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.404006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.404015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
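The stream of ABORTED - SQ DELETION completions here (and continuing below) is the point of the test rather than a failure: while bdevperf is driving 64-deep writes, host_management.sh revokes host0's access to cnode0, the target tears down the TCP qpair, and every in-flight command completes as aborted; access is granted again afterwards so the initiator can reconnect. The same two RPCs, issued by hand against the namespaced target (RPC names and NQNs as in the trace; rpc.py stands in for the test's rpc_cmd wrapper):

# Hedged sketch of the host-management step exercised above.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
# Revoke host0's access to cnode0: its connection is dropped and outstanding
# I/O completes as ABORTED - SQ DELETION, as logged here.
$RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-grant access so the host can reconnect and I/O can resume.
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0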
00:36:55.956 [2024-10-06 11:31:53.404022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.404030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.404037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.404045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.404051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.404063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.404070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.956 [2024-10-06 11:31:53.404078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.956 [2024-10-06 11:31:53.404085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 
[2024-10-06 11:31:53.404176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 
11:31:53.404323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 
11:31:53.404470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 
11:31:53.404617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.957 [2024-10-06 11:31:53.404638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.957 [2024-10-06 11:31:53.404645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.958 [2024-10-06 11:31:53.404653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.958 [2024-10-06 11:31:53.404660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.958 [2024-10-06 11:31:53.404670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.958 [2024-10-06 11:31:53.404676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:55.958 [2024-10-06 11:31:53.404739] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x168d480 was disconnected and freed. reset controller. 00:36:55.958 [2024-10-06 11:31:53.405612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:55.958 task offset: 90112 on job bdev=Nvme0n1 fails 00:36:55.958 00:36:55.958 Latency(us) 00:36:55.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:55.958 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:36:55.958 Job: Nvme0n1 ended in about 0.41 seconds with error 00:36:55.958 Verification LBA range: start 0x0 length 0x400 00:36:55.958 Nvme0n1 : 0.41 1726.12 107.88 156.92 0.00 33124.91 1341.93 27088.21 00:36:55.958 =================================================================================================================== 00:36:55.958 Total : 1726.12 107.88 156.92 0.00 33124.91 1341.93 27088.21 00:36:55.958 [2024-10-06 11:31:53.407928] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:55.958 [2024-10-06 11:31:53.407947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1690fc0 (9): Bad file descriptor 00:36:55.958 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.958 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:36:55.958 [2024-10-06 11:31:53.411440] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
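A quick consistency check on the failed-run table above: bdevperf derives MiB/s from IOPS and the 65536-byte IO size, and 1726.12 IOPS x 65536 / 1048576 ≈ 107.88 MiB/s matches the Nvme0n1 row; likewise 156.92 Fail/s over the 0.41 s runtime is roughly 64 failed I/Os, consistent with a full queue depth of 64 in-flight writes being aborted by the SQ deletion during the controller reset.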
00:36:56.897 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2295552 00:36:56.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2295552) - No such process 00:36:56.897 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:36:56.897 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:36:56.897 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:36:56.897 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:36:56.897 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:36:56.897 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:36:56.897 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:56.897 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:56.897 { 00:36:56.897 "params": { 00:36:56.897 "name": "Nvme$subsystem", 00:36:56.897 "trtype": "$TEST_TRANSPORT", 00:36:56.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:56.897 "adrfam": "ipv4", 00:36:56.897 "trsvcid": "$NVMF_PORT", 00:36:56.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:56.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:56.897 "hdgst": ${hdgst:-false}, 00:36:56.897 "ddgst": ${ddgst:-false} 00:36:56.897 }, 00:36:56.897 "method": "bdev_nvme_attach_controller" 00:36:56.897 } 00:36:56.897 EOF 00:36:56.897 )") 00:36:56.897 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:36:56.897 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:36:56.897 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:36:56.897 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:56.897 "params": { 00:36:56.897 "name": "Nvme0", 00:36:56.897 "trtype": "tcp", 00:36:56.897 "traddr": "10.0.0.2", 00:36:56.897 "adrfam": "ipv4", 00:36:56.897 "trsvcid": "4420", 00:36:56.897 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:56.897 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:56.897 "hdgst": false, 00:36:56.897 "ddgst": false 00:36:56.897 }, 00:36:56.897 "method": "bdev_nvme_attach_controller" 00:36:56.897 }' 00:36:56.897 [2024-10-06 11:31:54.462998] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
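The bdevperf invocation above receives its bdev configuration through /dev/fd/62. For readers who want to rerun it by hand, here is a minimal standalone sketch: the attach-controller fragment and the -q/-o/-w/-t flags are copied from the trace, while the outer "subsystems"/"bdev" wrapper and the temporary file path are assumptions (gen_nvmf_target_json normally wraps the fragment this way, but the wrapper itself is not shown in the log).

```bash
#!/usr/bin/env bash
# Sketch only: replays the traced bdevperf run against the NVMe/TCP target.
# SPDK_DIR and the config file path are assumptions; the JSON params and the
# workload flags are taken verbatim from the trace above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Queue depth 64, 64 KiB I/O, verify workload, 1 second -- same shape as the test run.
"$SPDK_DIR"/build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1
```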
00:36:56.897 [2024-10-06 11:31:54.463048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2295799 ] 00:36:57.156 [2024-10-06 11:31:54.518508] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:57.156 [2024-10-06 11:31:54.555281] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:57.156 Running I/O for 1 seconds... 00:36:58.535 1739.00 IOPS, 108.69 MiB/s 00:36:58.535 Latency(us) 00:36:58.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:58.535 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:36:58.535 Verification LBA range: start 0x0 length 0x400 00:36:58.535 Nvme0n1 : 1.01 1791.10 111.94 0.00 0.00 35103.30 1693.01 30084.14 00:36:58.535 =================================================================================================================== 00:36:58.535 Total : 1791.10 111.94 0.00 0.00 35103.30 1693.01 30084.14 00:36:58.535 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:58.536 rmmod nvme_tcp 00:36:58.536 rmmod nvme_fabrics 00:36:58.536 rmmod nvme_keyring 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 2295315 ']' 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 2295315 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@950 -- # '[' -z 2295315 ']' 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2295315 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:58.536 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2295315 00:36:58.536 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:58.536 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:58.536 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2295315' 00:36:58.536 killing process with pid 2295315 00:36:58.536 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2295315 00:36:58.536 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2295315 00:36:58.795 [2024-10-06 11:31:56.192416] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:36:58.795 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:58.795 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:58.795 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:58.795 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:36:58.795 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:58.795 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:36:58.795 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:36:58.795 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:58.795 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:58.795 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:58.795 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:58.795 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:01.331 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:01.331 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:01.331 00:37:01.331 real 0m11.509s 00:37:01.331 user 0m16.793s 00:37:01.331 sys 0m5.783s 00:37:01.331 11:31:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:01.331 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:01.331 ************************************ 00:37:01.331 END TEST nvmf_host_management 00:37:01.331 ************************************ 00:37:01.331 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:01.331 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:01.331 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:01.331 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:01.331 ************************************ 00:37:01.331 START TEST nvmf_lvol 00:37:01.331 ************************************ 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:01.332 * Looking for test storage... 00:37:01.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:01.332 11:31:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:01.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.332 --rc genhtml_branch_coverage=1 00:37:01.332 --rc genhtml_function_coverage=1 00:37:01.332 --rc genhtml_legend=1 00:37:01.332 --rc geninfo_all_blocks=1 00:37:01.332 --rc geninfo_unexecuted_blocks=1 00:37:01.332 00:37:01.332 ' 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:01.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.332 --rc genhtml_branch_coverage=1 00:37:01.332 --rc genhtml_function_coverage=1 00:37:01.332 --rc genhtml_legend=1 00:37:01.332 --rc geninfo_all_blocks=1 00:37:01.332 --rc geninfo_unexecuted_blocks=1 00:37:01.332 00:37:01.332 ' 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:01.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.332 --rc genhtml_branch_coverage=1 00:37:01.332 --rc genhtml_function_coverage=1 00:37:01.332 --rc genhtml_legend=1 00:37:01.332 --rc geninfo_all_blocks=1 00:37:01.332 --rc geninfo_unexecuted_blocks=1 00:37:01.332 00:37:01.332 ' 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:01.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.332 --rc genhtml_branch_coverage=1 00:37:01.332 --rc genhtml_function_coverage=1 00:37:01.332 --rc 
genhtml_legend=1 00:37:01.332 --rc geninfo_all_blocks=1 00:37:01.332 --rc geninfo_unexecuted_blocks=1 00:37:01.332 00:37:01.332 ' 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:01.332 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:01.333 11:31:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:01.333 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:06.613 11:32:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:06.613 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:06.613 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:06.613 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:06.614 Found net devices under 0000:af:00.0: cvl_0_0 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:06.614 Found net devices under 0000:af:00.1: cvl_0_1 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:06.614 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:06.614 
11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:06.614 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:06.614 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:06.614 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:06.614 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:06.614 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:06.614 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:06.614 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:06.614 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:06.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:06.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:37:06.614 00:37:06.614 --- 10.0.0.2 ping statistics --- 00:37:06.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:06.614 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:37:06.614 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:06.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:06.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:37:06.614 00:37:06.614 --- 10.0.0.1 ping statistics --- 00:37:06.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:06.614 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:37:06.614 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:06.614 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:37:06.614 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:06.614 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:06.614 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:06.614 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:06.614 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:06.614 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:06.614 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:06.875 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:06.875 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:06.875 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:06.875 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:06.875 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=2299480 00:37:06.875 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:06.875 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 2299480 00:37:06.875 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2299480 ']' 00:37:06.875 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:06.875 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:06.875 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:06.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:06.875 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:06.875 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:06.875 [2024-10-06 11:32:04.246301] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
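Taken together, the nvmf_lvol bring-up traced above condenses into a short sequence. This is a sketch under the assumptions that it runs as root on this host's E810 ports (cvl_0_0/cvl_0_1) and that SPDK_DIR points at the workspace checkout; every command is taken from the trace.

```bash
#!/usr/bin/env bash
# Sketch of the traced topology: cvl_0_0 becomes the target-side port inside a
# network namespace, cvl_0_1 stays in the host namespace as the initiator side.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout location

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port and confirm reachability in both directions
# (the harness additionally tags the rule with an SPDK_NVMF comment for later cleanup).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

modprobe nvme-tcp

# Start the target inside the namespace in interrupt mode on cores 0-2 (-m 0x7),
# matching the nvmfappstart call used by nvmf_lvol.sh.
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
```

Once the target reports that it is running in interrupt mode, the test provisions storage over rpc.py as traced below: it creates the TCP transport, assembles a raid0 bdev from two 64 MB malloc bdevs (512-byte blocks), layers the lvstore and test lvol on top, and exposes the lvol through subsystem nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420.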
00:37:06.875 [2024-10-06 11:32:04.247190] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:37:06.875 [2024-10-06 11:32:04.247221] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:06.875 [2024-10-06 11:32:04.304054] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:06.875 [2024-10-06 11:32:04.343585] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:06.875 [2024-10-06 11:32:04.343627] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:06.875 [2024-10-06 11:32:04.343635] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:06.875 [2024-10-06 11:32:04.343641] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:06.875 [2024-10-06 11:32:04.343646] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:06.875 [2024-10-06 11:32:04.348077] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:06.875 [2024-10-06 11:32:04.348096] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:37:06.875 [2024-10-06 11:32:04.348098] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:06.875 [2024-10-06 11:32:04.418304] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:06.875 [2024-10-06 11:32:04.418379] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:06.875 [2024-10-06 11:32:04.418414] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:06.875 [2024-10-06 11:32:04.418567] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:37:06.875 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:06.875 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:37:06.875 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:06.875 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:06.875 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:07.135 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:07.135 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:07.135 [2024-10-06 11:32:04.636620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:07.135 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:07.394 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:07.394 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:07.653 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:07.653 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:07.913 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:07.913 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3f9ef315-4059-483c-8565-0446d2cbcdb4 00:37:07.913 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3f9ef315-4059-483c-8565-0446d2cbcdb4 lvol 20 00:37:08.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=dff16d2f-66b6-469e-b52a-11ad4030f52e 00:37:08.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:08.433 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dff16d2f-66b6-469e-b52a-11ad4030f52e 00:37:08.693 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:08.693 [2024-10-06 11:32:06.180765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:37:08.693 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:08.952 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2299745 00:37:08.952 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:08.952 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:09.890 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot dff16d2f-66b6-469e-b52a-11ad4030f52e MY_SNAPSHOT 00:37:10.149 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=acabb5dd-5722-4674-b8e3-3f43b44bba27 00:37:10.149 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize dff16d2f-66b6-469e-b52a-11ad4030f52e 30 00:37:10.408 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone acabb5dd-5722-4674-b8e3-3f43b44bba27 MY_CLONE 00:37:10.666 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2da88c09-f8e0-4c56-ad6f-43cebd5385e6 00:37:10.666 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 2da88c09-f8e0-4c56-ad6f-43cebd5385e6 00:37:11.235 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2299745 00:37:19.355 Initializing NVMe Controllers 00:37:19.355 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:19.355 Controller IO queue size 128, less than required. 00:37:19.355 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:19.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:19.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:19.355 Initialization complete. Launching workers. 
00:37:19.355 ======================================================== 00:37:19.355 Latency(us) 00:37:19.355 Device Information : IOPS MiB/s Average min max 00:37:19.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12628.87 49.33 10140.11 1531.62 51589.58 00:37:19.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12509.67 48.87 10234.66 2091.85 51202.60 00:37:19.355 ======================================================== 00:37:19.355 Total : 25138.54 98.20 10187.16 1531.62 51589.58 00:37:19.355 00:37:19.355 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:19.355 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dff16d2f-66b6-469e-b52a-11ad4030f52e 00:37:19.614 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3f9ef315-4059-483c-8565-0446d2cbcdb4 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:19.873 rmmod nvme_tcp 00:37:19.873 rmmod nvme_fabrics 00:37:19.873 rmmod nvme_keyring 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 2299480 ']' 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 2299480 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2299480 ']' 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2299480 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2299480 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2299480' 00:37:19.873 killing process with pid 2299480 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2299480 00:37:19.873 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2299480 00:37:20.133 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:20.133 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:20.133 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:20.133 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:20.133 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:37:20.133 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:20.133 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:37:20.133 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:20.133 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:20.133 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:20.133 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:20.133 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.672 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:22.672 00:37:22.672 real 0m21.323s 00:37:22.672 user 0m54.955s 00:37:22.672 sys 0m9.649s 00:37:22.672 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:22.672 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:22.672 ************************************ 00:37:22.672 END TEST nvmf_lvol 00:37:22.672 ************************************ 00:37:22.672 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:22.672 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:22.672 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:22.672 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:22.672 ************************************ 00:37:22.672 START TEST nvmf_lvs_grow 00:37:22.672 
************************************ 00:37:22.672 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:22.672 * Looking for test storage... 00:37:22.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:22.672 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:22.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.673 --rc genhtml_branch_coverage=1 00:37:22.673 --rc genhtml_function_coverage=1 00:37:22.673 --rc genhtml_legend=1 00:37:22.673 --rc geninfo_all_blocks=1 00:37:22.673 --rc geninfo_unexecuted_blocks=1 00:37:22.673 00:37:22.673 ' 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:22.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.673 --rc genhtml_branch_coverage=1 00:37:22.673 --rc genhtml_function_coverage=1 00:37:22.673 --rc genhtml_legend=1 00:37:22.673 --rc geninfo_all_blocks=1 00:37:22.673 --rc geninfo_unexecuted_blocks=1 00:37:22.673 00:37:22.673 ' 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:22.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.673 --rc genhtml_branch_coverage=1 00:37:22.673 --rc genhtml_function_coverage=1 00:37:22.673 --rc genhtml_legend=1 00:37:22.673 --rc geninfo_all_blocks=1 00:37:22.673 --rc geninfo_unexecuted_blocks=1 00:37:22.673 00:37:22.673 ' 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:22.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.673 --rc genhtml_branch_coverage=1 00:37:22.673 --rc genhtml_function_coverage=1 00:37:22.673 --rc genhtml_legend=1 00:37:22.673 --rc geninfo_all_blocks=1 00:37:22.673 --rc geninfo_unexecuted_blocks=1 00:37:22.673 00:37:22.673 ' 00:37:22.673 11:32:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:22.673 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:22.674 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:27.951 11:32:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:27.951 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:27.951 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:27.951 Found net devices under 0000:af:00.0: cvl_0_0 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:27.951 Found net devices under 0000:af:00.1: cvl_0_1 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:27.951 11:32:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:27.951 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:27.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:27.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:37:27.952 00:37:27.952 --- 10.0.0.2 ping statistics --- 00:37:27.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:27.952 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:27.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:27.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:37:27.952 00:37:27.952 --- 10.0.0.1 ping statistics --- 00:37:27.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:27.952 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=2304887 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 2304887 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2304887 ']' 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:27.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:27.952 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:27.952 [2024-10-06 11:32:25.497483] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:37:27.952 [2024-10-06 11:32:25.498410] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:37:27.952 [2024-10-06 11:32:25.498446] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:28.212 [2024-10-06 11:32:25.556476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.212 [2024-10-06 11:32:25.595366] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:28.212 [2024-10-06 11:32:25.595408] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:28.212 [2024-10-06 11:32:25.595415] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:28.212 [2024-10-06 11:32:25.595421] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:28.212 [2024-10-06 11:32:25.595426] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:28.212 [2024-10-06 11:32:25.595953] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:28.212 [2024-10-06 11:32:25.657400] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:28.212 [2024-10-06 11:32:25.657612] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:28.212 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:28.212 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:37:28.212 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:28.212 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:28.212 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:28.212 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:28.212 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:28.473 [2024-10-06 11:32:25.888426] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:28.473 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:28.473 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:28.473 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:28.473 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:28.473 ************************************ 00:37:28.473 START TEST lvs_grow_clean 00:37:28.473 ************************************ 00:37:28.473 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:37:28.473 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:28.473 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:28.473 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:28.473 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:28.473 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:28.473 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:28.473 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:28.473 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:28.473 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:28.733 11:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:28.733 11:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:28.992 11:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=dcb958ec-26d5-4318-8eb6-1642552cbd19 00:37:28.992 11:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dcb958ec-26d5-4318-8eb6-1642552cbd19 00:37:28.992 11:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:28.992 11:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:28.992 11:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:28.992 11:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dcb958ec-26d5-4318-8eb6-1642552cbd19 lvol 150 00:37:29.252 11:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=334c3c48-4104-44e5-9717-1be910f6332f 00:37:29.252 11:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:29.252 11:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:29.511 [2024-10-06 11:32:26.924297] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:29.511 [2024-10-06 11:32:26.924376] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:29.511 true 00:37:29.511 11:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dcb958ec-26d5-4318-8eb6-1642552cbd19 00:37:29.511 11:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:29.770 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:29.770 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:29.770 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 334c3c48-4104-44e5-9717-1be910f6332f 00:37:30.029 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:30.288 [2024-10-06 11:32:27.668605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:30.288 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:30.288 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2305252 00:37:30.288 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:30.288 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:30.288 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2305252 /var/tmp/bdevperf.sock 00:37:30.288 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2305252 ']' 00:37:30.288 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:30.288 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:30.288 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:30.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:30.288 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:30.288 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:30.548 [2024-10-06 11:32:27.903154] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:37:30.548 [2024-10-06 11:32:27.903205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2305252 ] 00:37:30.548 [2024-10-06 11:32:27.958644] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.548 [2024-10-06 11:32:27.998457] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:30.548 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:30.548 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:37:30.548 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:31.116 Nvme0n1 00:37:31.116 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:31.116 [ 00:37:31.116 { 00:37:31.116 "name": "Nvme0n1", 00:37:31.116 "aliases": [ 00:37:31.116 "334c3c48-4104-44e5-9717-1be910f6332f" 00:37:31.116 ], 00:37:31.116 "product_name": "NVMe disk", 00:37:31.116 "block_size": 4096, 00:37:31.116 "num_blocks": 38912, 00:37:31.116 "uuid": "334c3c48-4104-44e5-9717-1be910f6332f", 00:37:31.116 "numa_id": 1, 00:37:31.116 "assigned_rate_limits": { 00:37:31.116 "rw_ios_per_sec": 0, 00:37:31.116 "rw_mbytes_per_sec": 0, 00:37:31.116 "r_mbytes_per_sec": 0, 00:37:31.116 "w_mbytes_per_sec": 0 00:37:31.116 }, 00:37:31.116 "claimed": false, 00:37:31.116 "zoned": false, 00:37:31.116 "supported_io_types": { 00:37:31.116 "read": true, 00:37:31.116 "write": true, 00:37:31.116 "unmap": true, 00:37:31.116 "flush": true, 00:37:31.116 "reset": true, 00:37:31.116 "nvme_admin": true, 00:37:31.116 "nvme_io": true, 00:37:31.116 "nvme_io_md": false, 00:37:31.116 "write_zeroes": true, 00:37:31.116 "zcopy": false, 00:37:31.116 "get_zone_info": false, 00:37:31.116 "zone_management": false, 00:37:31.116 "zone_append": false, 00:37:31.116 "compare": true, 00:37:31.116 "compare_and_write": true, 00:37:31.116 "abort": true, 00:37:31.116 "seek_hole": false, 00:37:31.116 "seek_data": false, 00:37:31.116 "copy": true, 
00:37:31.116 "nvme_iov_md": false 00:37:31.116 }, 00:37:31.116 "memory_domains": [ 00:37:31.116 { 00:37:31.116 "dma_device_id": "system", 00:37:31.116 "dma_device_type": 1 00:37:31.116 } 00:37:31.116 ], 00:37:31.116 "driver_specific": { 00:37:31.116 "nvme": [ 00:37:31.116 { 00:37:31.116 "trid": { 00:37:31.116 "trtype": "TCP", 00:37:31.116 "adrfam": "IPv4", 00:37:31.116 "traddr": "10.0.0.2", 00:37:31.116 "trsvcid": "4420", 00:37:31.116 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:31.116 }, 00:37:31.116 "ctrlr_data": { 00:37:31.116 "cntlid": 1, 00:37:31.116 "vendor_id": "0x8086", 00:37:31.116 "model_number": "SPDK bdev Controller", 00:37:31.116 "serial_number": "SPDK0", 00:37:31.116 "firmware_revision": "25.01", 00:37:31.116 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:31.116 "oacs": { 00:37:31.116 "security": 0, 00:37:31.116 "format": 0, 00:37:31.116 "firmware": 0, 00:37:31.116 "ns_manage": 0 00:37:31.116 }, 00:37:31.116 "multi_ctrlr": true, 00:37:31.116 "ana_reporting": false 00:37:31.116 }, 00:37:31.116 "vs": { 00:37:31.116 "nvme_version": "1.3" 00:37:31.116 }, 00:37:31.116 "ns_data": { 00:37:31.116 "id": 1, 00:37:31.116 "can_share": true 00:37:31.116 } 00:37:31.116 } 00:37:31.116 ], 00:37:31.116 "mp_policy": "active_passive" 00:37:31.116 } 00:37:31.116 } 00:37:31.116 ] 00:37:31.116 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2305475 00:37:31.116 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:31.116 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:31.374 Running I/O for 10 seconds... 
00:37:32.310 Latency(us) 00:37:32.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:32.310 Nvme0n1 : 1.00 21806.00 85.18 0.00 0.00 0.00 0.00 0.00 00:37:32.310 =================================================================================================================== 00:37:32.311 Total : 21806.00 85.18 0.00 0.00 0.00 0.00 0.00 00:37:32.311 00:37:33.334 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dcb958ec-26d5-4318-8eb6-1642552cbd19 00:37:33.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:33.334 Nvme0n1 : 2.00 21719.00 84.84 0.00 0.00 0.00 0.00 0.00 00:37:33.334 =================================================================================================================== 00:37:33.334 Total : 21719.00 84.84 0.00 0.00 0.00 0.00 0.00 00:37:33.334 00:37:33.334 true 00:37:33.334 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dcb958ec-26d5-4318-8eb6-1642552cbd19 00:37:33.334 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:33.593 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:33.593 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:33.593 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2305475 00:37:34.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:34.162 Nvme0n1 : 3.00 21823.33 85.25 0.00 0.00 0.00 0.00 0.00 00:37:34.162 =================================================================================================================== 00:37:34.162 Total : 21823.33 85.25 0.00 0.00 0.00 0.00 0.00 00:37:34.162 00:37:35.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:35.541 Nvme0n1 : 4.00 21915.50 85.61 0.00 0.00 0.00 0.00 0.00 00:37:35.541 =================================================================================================================== 00:37:35.541 Total : 21915.50 85.61 0.00 0.00 0.00 0.00 0.00 00:37:35.541 00:37:36.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:36.477 Nvme0n1 : 5.00 22002.80 85.95 0.00 0.00 0.00 0.00 0.00 00:37:36.477 =================================================================================================================== 00:37:36.477 Total : 22002.80 85.95 0.00 0.00 0.00 0.00 0.00 00:37:36.477 00:37:37.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:37.414 Nvme0n1 : 6.00 22065.00 86.19 0.00 0.00 0.00 0.00 0.00 00:37:37.414 =================================================================================================================== 00:37:37.414 Total : 22065.00 86.19 0.00 0.00 0.00 0.00 0.00 00:37:37.414 00:37:38.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:38.352 Nvme0n1 : 7.00 22112.86 86.38 0.00 0.00 0.00 0.00 
0.00 00:37:38.352 =================================================================================================================== 00:37:38.352 Total : 22112.86 86.38 0.00 0.00 0.00 0.00 0.00 00:37:38.352 00:37:39.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:39.288 Nvme0n1 : 8.00 22153.75 86.54 0.00 0.00 0.00 0.00 0.00 00:37:39.288 =================================================================================================================== 00:37:39.288 Total : 22153.75 86.54 0.00 0.00 0.00 0.00 0.00 00:37:39.288 00:37:40.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:40.224 Nvme0n1 : 9.00 22185.56 86.66 0.00 0.00 0.00 0.00 0.00 00:37:40.224 =================================================================================================================== 00:37:40.224 Total : 22185.56 86.66 0.00 0.00 0.00 0.00 0.00 00:37:40.224 00:37:41.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:41.601 Nvme0n1 : 10.00 22211.80 86.76 0.00 0.00 0.00 0.00 0.00 00:37:41.602 =================================================================================================================== 00:37:41.602 Total : 22211.80 86.76 0.00 0.00 0.00 0.00 0.00 00:37:41.602 00:37:41.602 00:37:41.602 Latency(us) 00:37:41.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:41.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:41.602 Nvme0n1 : 10.01 22211.90 86.77 0.00 0.00 5758.61 1622.80 10985.08 00:37:41.602 =================================================================================================================== 00:37:41.602 Total : 22211.90 86.77 0.00 0.00 5758.61 1622.80 10985.08 00:37:41.602 { 00:37:41.602 "results": [ 00:37:41.602 { 00:37:41.602 "job": "Nvme0n1", 00:37:41.602 "core_mask": "0x2", 00:37:41.602 "workload": "randwrite", 00:37:41.602 "status": "finished", 00:37:41.602 "queue_depth": 128, 00:37:41.602 "io_size": 4096, 00:37:41.602 "runtime": 10.005359, 00:37:41.602 "iops": 22211.896644588167, 00:37:41.602 "mibps": 86.76522126792253, 00:37:41.602 "io_failed": 0, 00:37:41.602 "io_timeout": 0, 00:37:41.602 "avg_latency_us": 5758.605606293381, 00:37:41.602 "min_latency_us": 1622.7961904761905, 00:37:41.602 "max_latency_us": 10985.081904761904 00:37:41.602 } 00:37:41.602 ], 00:37:41.602 "core_count": 1 00:37:41.602 } 00:37:41.602 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2305252 00:37:41.602 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2305252 ']' 00:37:41.602 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2305252 00:37:41.602 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:37:41.602 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:41.602 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2305252 00:37:41.602 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:41.602 11:32:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:41.602 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2305252' 00:37:41.602 killing process with pid 2305252 00:37:41.602 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2305252 00:37:41.602 Received shutdown signal, test time was about 10.000000 seconds 00:37:41.602 00:37:41.602 Latency(us) 00:37:41.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:41.602 =================================================================================================================== 00:37:41.602 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:41.602 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2305252 00:37:41.602 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:41.861 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:41.861 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dcb958ec-26d5-4318-8eb6-1642552cbd19 00:37:41.861 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:42.120 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:42.120 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:37:42.120 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:42.380 [2024-10-06 11:32:39.748334] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:42.380 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dcb958ec-26d5-4318-8eb6-1642552cbd19 00:37:42.380 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:37:42.380 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dcb958ec-26d5-4318-8eb6-1642552cbd19 00:37:42.380 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:42.380 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:42.380 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:42.380 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:42.380 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:42.380 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:42.380 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:42.380 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:42.380 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dcb958ec-26d5-4318-8eb6-1642552cbd19 00:37:42.639 request: 00:37:42.640 { 00:37:42.640 "uuid": "dcb958ec-26d5-4318-8eb6-1642552cbd19", 00:37:42.640 "method": "bdev_lvol_get_lvstores", 00:37:42.640 "req_id": 1 00:37:42.640 } 00:37:42.640 Got JSON-RPC error response 00:37:42.640 response: 00:37:42.640 { 00:37:42.640 "code": -19, 00:37:42.640 "message": "No such device" 00:37:42.640 } 00:37:42.640 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:37:42.640 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:42.640 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:42.640 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:42.640 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:42.640 aio_bdev 00:37:42.640 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 334c3c48-4104-44e5-9717-1be910f6332f 00:37:42.640 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=334c3c48-4104-44e5-9717-1be910f6332f 00:37:42.640 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:42.640 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:37:42.640 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:42.640 11:32:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:42.640 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:42.899 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 334c3c48-4104-44e5-9717-1be910f6332f -t 2000 00:37:43.158 [ 00:37:43.158 { 00:37:43.158 "name": "334c3c48-4104-44e5-9717-1be910f6332f", 00:37:43.158 "aliases": [ 00:37:43.158 "lvs/lvol" 00:37:43.158 ], 00:37:43.158 "product_name": "Logical Volume", 00:37:43.158 "block_size": 4096, 00:37:43.158 "num_blocks": 38912, 00:37:43.158 "uuid": "334c3c48-4104-44e5-9717-1be910f6332f", 00:37:43.158 "assigned_rate_limits": { 00:37:43.158 "rw_ios_per_sec": 0, 00:37:43.158 "rw_mbytes_per_sec": 0, 00:37:43.158 "r_mbytes_per_sec": 0, 00:37:43.158 "w_mbytes_per_sec": 0 00:37:43.158 }, 00:37:43.158 "claimed": false, 00:37:43.158 "zoned": false, 00:37:43.158 "supported_io_types": { 00:37:43.158 "read": true, 00:37:43.158 "write": true, 00:37:43.158 "unmap": true, 00:37:43.158 "flush": false, 00:37:43.158 "reset": true, 00:37:43.158 "nvme_admin": false, 00:37:43.158 "nvme_io": false, 00:37:43.158 "nvme_io_md": false, 00:37:43.158 "write_zeroes": true, 00:37:43.158 "zcopy": false, 00:37:43.158 "get_zone_info": false, 00:37:43.158 "zone_management": false, 00:37:43.158 "zone_append": false, 00:37:43.158 "compare": false, 00:37:43.158 "compare_and_write": false, 00:37:43.158 "abort": false, 00:37:43.158 "seek_hole": true, 00:37:43.158 "seek_data": true, 00:37:43.158 "copy": false, 00:37:43.158 "nvme_iov_md": false 00:37:43.158 }, 00:37:43.158 "driver_specific": { 00:37:43.158 "lvol": { 00:37:43.158 "lvol_store_uuid": "dcb958ec-26d5-4318-8eb6-1642552cbd19", 00:37:43.158 "base_bdev": "aio_bdev", 00:37:43.158 "thin_provision": false, 00:37:43.158 "num_allocated_clusters": 38, 00:37:43.158 "snapshot": false, 00:37:43.158 "clone": false, 00:37:43.158 "esnap_clone": false 00:37:43.158 } 00:37:43.158 } 00:37:43.158 } 00:37:43.158 ] 00:37:43.158 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:37:43.158 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dcb958ec-26d5-4318-8eb6-1642552cbd19 00:37:43.158 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:43.418 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:43.418 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dcb958ec-26d5-4318-8eb6-1642552cbd19 00:37:43.418 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:43.418 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 
)) 00:37:43.418 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 334c3c48-4104-44e5-9717-1be910f6332f 00:37:43.677 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dcb958ec-26d5-4318-8eb6-1642552cbd19 00:37:43.936 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:44.196 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:44.196 00:37:44.196 real 0m15.608s 00:37:44.196 user 0m15.094s 00:37:44.196 sys 0m1.538s 00:37:44.196 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:44.196 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:44.196 ************************************ 00:37:44.196 END TEST lvs_grow_clean 00:37:44.196 ************************************ 00:37:44.196 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:37:44.196 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:44.196 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:44.196 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:44.196 ************************************ 00:37:44.196 START TEST lvs_grow_dirty 00:37:44.196 ************************************ 00:37:44.196 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:37:44.196 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:44.196 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:44.196 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:44.196 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:44.196 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:44.196 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:44.196 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:44.196 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:44.196 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:44.455 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:44.455 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:44.456 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1dad6072-671a-476a-849b-b7a797e00981 00:37:44.456 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dad6072-671a-476a-849b-b7a797e00981 00:37:44.456 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:44.715 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:44.715 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:44.715 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1dad6072-671a-476a-849b-b7a797e00981 lvol 150 00:37:44.984 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c7249d18-2560-479a-ab5b-57882d1f28e4 00:37:44.985 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:44.985 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:45.246 [2024-10-06 11:32:42.576359] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:45.246 [2024-10-06 11:32:42.576490] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:45.246 true 00:37:45.246 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dad6072-671a-476a-849b-b7a797e00981 00:37:45.246 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:45.246 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:45.246 11:32:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:45.505 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c7249d18-2560-479a-ab5b-57882d1f28e4 00:37:45.764 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:45.764 [2024-10-06 11:32:43.304572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:45.764 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:46.024 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2307772 00:37:46.024 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:46.024 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:46.024 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2307772 /var/tmp/bdevperf.sock 00:37:46.024 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2307772 ']' 00:37:46.024 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:46.024 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:46.024 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:46.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:46.024 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:46.024 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:46.024 [2024-10-06 11:32:43.543259] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:37:46.024 [2024-10-06 11:32:43.543308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2307772 ] 00:37:46.024 [2024-10-06 11:32:43.595757] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:46.283 [2024-10-06 11:32:43.635267] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:46.283 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:46.283 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:37:46.283 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:46.542 Nvme0n1 00:37:46.543 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:46.802 [ 00:37:46.802 { 00:37:46.802 "name": "Nvme0n1", 00:37:46.802 "aliases": [ 00:37:46.802 "c7249d18-2560-479a-ab5b-57882d1f28e4" 00:37:46.802 ], 00:37:46.802 "product_name": "NVMe disk", 00:37:46.802 "block_size": 4096, 00:37:46.802 "num_blocks": 38912, 00:37:46.802 "uuid": "c7249d18-2560-479a-ab5b-57882d1f28e4", 00:37:46.802 "numa_id": 1, 00:37:46.802 "assigned_rate_limits": { 00:37:46.802 "rw_ios_per_sec": 0, 00:37:46.802 "rw_mbytes_per_sec": 0, 00:37:46.802 "r_mbytes_per_sec": 0, 00:37:46.802 "w_mbytes_per_sec": 0 00:37:46.802 }, 00:37:46.802 "claimed": false, 00:37:46.802 "zoned": false, 00:37:46.802 "supported_io_types": { 00:37:46.802 "read": true, 00:37:46.802 "write": true, 00:37:46.802 "unmap": true, 00:37:46.802 "flush": true, 00:37:46.802 "reset": true, 00:37:46.802 "nvme_admin": true, 00:37:46.802 "nvme_io": true, 00:37:46.802 "nvme_io_md": false, 00:37:46.802 "write_zeroes": true, 00:37:46.802 "zcopy": false, 00:37:46.802 "get_zone_info": false, 00:37:46.802 "zone_management": false, 00:37:46.802 "zone_append": false, 00:37:46.802 "compare": true, 00:37:46.802 "compare_and_write": true, 00:37:46.802 "abort": true, 00:37:46.802 "seek_hole": false, 00:37:46.802 "seek_data": false, 00:37:46.802 "copy": true, 00:37:46.802 "nvme_iov_md": false 00:37:46.802 }, 00:37:46.802 "memory_domains": [ 00:37:46.802 { 00:37:46.802 "dma_device_id": "system", 00:37:46.802 "dma_device_type": 1 00:37:46.802 } 00:37:46.802 ], 00:37:46.802 "driver_specific": { 00:37:46.802 "nvme": [ 00:37:46.802 { 00:37:46.802 "trid": { 00:37:46.802 "trtype": "TCP", 00:37:46.802 "adrfam": "IPv4", 00:37:46.802 "traddr": "10.0.0.2", 00:37:46.802 "trsvcid": "4420", 00:37:46.802 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:46.802 }, 00:37:46.802 "ctrlr_data": { 00:37:46.802 "cntlid": 1, 00:37:46.802 "vendor_id": "0x8086", 00:37:46.802 "model_number": "SPDK bdev Controller", 00:37:46.802 "serial_number": "SPDK0", 00:37:46.802 "firmware_revision": "25.01", 00:37:46.802 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:46.802 "oacs": { 00:37:46.802 "security": 0, 00:37:46.802 "format": 0, 00:37:46.802 "firmware": 0, 00:37:46.802 "ns_manage": 0 00:37:46.802 }, 
00:37:46.802 "multi_ctrlr": true, 00:37:46.802 "ana_reporting": false 00:37:46.802 }, 00:37:46.802 "vs": { 00:37:46.802 "nvme_version": "1.3" 00:37:46.802 }, 00:37:46.802 "ns_data": { 00:37:46.802 "id": 1, 00:37:46.802 "can_share": true 00:37:46.802 } 00:37:46.802 } 00:37:46.802 ], 00:37:46.802 "mp_policy": "active_passive" 00:37:46.802 } 00:37:46.802 } 00:37:46.802 ] 00:37:46.802 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2307888 00:37:46.802 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:46.802 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:46.802 Running I/O for 10 seconds... 00:37:47.738 Latency(us) 00:37:47.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:47.738 Nvme0n1 : 1.00 21902.00 85.55 0.00 0.00 0.00 0.00 0.00 00:37:47.738 =================================================================================================================== 00:37:47.738 Total : 21902.00 85.55 0.00 0.00 0.00 0.00 0.00 00:37:47.738 00:37:48.675 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1dad6072-671a-476a-849b-b7a797e00981 00:37:48.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:48.934 Nvme0n1 : 2.00 22027.00 86.04 0.00 0.00 0.00 0.00 0.00 00:37:48.934 =================================================================================================================== 00:37:48.934 Total : 22027.00 86.04 0.00 0.00 0.00 0.00 0.00 00:37:48.934 00:37:48.934 true 00:37:48.934 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dad6072-671a-476a-849b-b7a797e00981 00:37:48.934 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:49.193 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:49.193 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:49.193 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2307888 00:37:49.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:49.760 Nvme0n1 : 3.00 22079.33 86.25 0.00 0.00 0.00 0.00 0.00 00:37:49.760 =================================================================================================================== 00:37:49.760 Total : 22079.33 86.25 0.00 0.00 0.00 0.00 0.00 00:37:49.760 00:37:51.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:51.138 Nvme0n1 : 4.00 22157.50 86.55 0.00 0.00 0.00 0.00 0.00 00:37:51.138 =================================================================================================================== 
00:37:51.138 Total : 22157.50 86.55 0.00 0.00 0.00 0.00 0.00 00:37:51.138 00:37:52.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:52.075 Nvme0n1 : 5.00 22199.60 86.72 0.00 0.00 0.00 0.00 0.00 00:37:52.075 =================================================================================================================== 00:37:52.075 Total : 22199.60 86.72 0.00 0.00 0.00 0.00 0.00 00:37:52.075 00:37:53.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:53.014 Nvme0n1 : 6.00 22194.33 86.70 0.00 0.00 0.00 0.00 0.00 00:37:53.014 =================================================================================================================== 00:37:53.014 Total : 22194.33 86.70 0.00 0.00 0.00 0.00 0.00 00:37:53.014 00:37:53.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:53.985 Nvme0n1 : 7.00 22186.00 86.66 0.00 0.00 0.00 0.00 0.00 00:37:53.985 =================================================================================================================== 00:37:53.985 Total : 22186.00 86.66 0.00 0.00 0.00 0.00 0.00 00:37:53.985 00:37:54.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:54.923 Nvme0n1 : 8.00 22201.75 86.73 0.00 0.00 0.00 0.00 0.00 00:37:54.923 =================================================================================================================== 00:37:54.923 Total : 22201.75 86.73 0.00 0.00 0.00 0.00 0.00 00:37:54.923 00:37:55.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:55.861 Nvme0n1 : 9.00 22230.00 86.84 0.00 0.00 0.00 0.00 0.00 00:37:55.861 =================================================================================================================== 00:37:55.861 Total : 22230.00 86.84 0.00 0.00 0.00 0.00 0.00 00:37:55.861 00:37:56.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:56.798 Nvme0n1 : 10.00 22252.60 86.92 0.00 0.00 0.00 0.00 0.00 00:37:56.798 =================================================================================================================== 00:37:56.798 Total : 22252.60 86.92 0.00 0.00 0.00 0.00 0.00 00:37:56.798 00:37:56.798 00:37:56.798 Latency(us) 00:37:56.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:56.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:56.798 Nvme0n1 : 10.01 22252.43 86.92 0.00 0.00 5748.28 1568.18 10985.08 00:37:56.798 =================================================================================================================== 00:37:56.798 Total : 22252.43 86.92 0.00 0.00 5748.28 1568.18 10985.08 00:37:56.798 { 00:37:56.798 "results": [ 00:37:56.798 { 00:37:56.798 "job": "Nvme0n1", 00:37:56.798 "core_mask": "0x2", 00:37:56.798 "workload": "randwrite", 00:37:56.798 "status": "finished", 00:37:56.798 "queue_depth": 128, 00:37:56.798 "io_size": 4096, 00:37:56.798 "runtime": 10.00547, 00:37:56.798 "iops": 22252.427921926705, 00:37:56.798 "mibps": 86.92354657002619, 00:37:56.798 "io_failed": 0, 00:37:56.798 "io_timeout": 0, 00:37:56.798 "avg_latency_us": 5748.282523878392, 00:37:56.798 "min_latency_us": 1568.182857142857, 00:37:56.798 "max_latency_us": 10985.081904761904 00:37:56.798 } 00:37:56.798 ], 00:37:56.798 "core_count": 1 00:37:56.798 } 00:37:56.798 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2307772 00:37:56.798 
11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2307772 ']' 00:37:56.798 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2307772 00:37:56.798 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:37:56.798 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:56.798 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2307772 00:37:57.057 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:57.057 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:57.057 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2307772' 00:37:57.057 killing process with pid 2307772 00:37:57.057 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2307772 00:37:57.057 Received shutdown signal, test time was about 10.000000 seconds 00:37:57.057 00:37:57.057 Latency(us) 00:37:57.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:57.057 =================================================================================================================== 00:37:57.057 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:57.057 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2307772 00:37:57.058 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:57.317 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:57.577 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dad6072-671a-476a-849b-b7a797e00981 00:37:57.577 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:57.577 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:57.577 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:37:57.577 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2304887 00:37:57.577 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2304887 00:37:57.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2304887 Killed 
"${NVMF_APP[@]}" "$@" 00:37:57.837 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:37:57.837 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:37:57.837 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:57.837 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:57.837 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:57.837 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=2309560 00:37:57.837 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:57.837 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 2309560 00:37:57.837 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2309560 ']' 00:37:57.837 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:57.837 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:57.837 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:57.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:57.837 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:57.837 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:57.837 [2024-10-06 11:32:55.206742] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:57.837 [2024-10-06 11:32:55.207627] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:37:57.837 [2024-10-06 11:32:55.207662] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:57.837 [2024-10-06 11:32:55.266167] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:57.837 [2024-10-06 11:32:55.304782] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:57.837 [2024-10-06 11:32:55.304820] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:57.838 [2024-10-06 11:32:55.304827] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:57.838 [2024-10-06 11:32:55.304833] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:37:57.838 [2024-10-06 11:32:55.304838] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:57.838 [2024-10-06 11:32:55.305330] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:57.838 [2024-10-06 11:32:55.366410] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:57.838 [2024-10-06 11:32:55.366622] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:57.838 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:57.838 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:37:57.838 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:57.838 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:57.838 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:58.097 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:58.097 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:58.097 [2024-10-06 11:32:55.600313] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:37:58.097 [2024-10-06 11:32:55.600471] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:37:58.097 [2024-10-06 11:32:55.600512] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:37:58.097 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:37:58.097 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c7249d18-2560-479a-ab5b-57882d1f28e4 00:37:58.097 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=c7249d18-2560-479a-ab5b-57882d1f28e4 00:37:58.097 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:58.097 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:37:58.097 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:58.097 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:58.097 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:58.357 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c7249d18-2560-479a-ab5b-57882d1f28e4 -t 2000 00:37:58.617 [ 00:37:58.617 { 00:37:58.617 "name": "c7249d18-2560-479a-ab5b-57882d1f28e4", 00:37:58.617 "aliases": [ 00:37:58.617 "lvs/lvol" 00:37:58.617 ], 00:37:58.617 "product_name": "Logical Volume", 00:37:58.617 "block_size": 4096, 00:37:58.617 "num_blocks": 38912, 00:37:58.617 "uuid": "c7249d18-2560-479a-ab5b-57882d1f28e4", 00:37:58.617 "assigned_rate_limits": { 00:37:58.617 "rw_ios_per_sec": 0, 00:37:58.617 "rw_mbytes_per_sec": 0, 00:37:58.617 "r_mbytes_per_sec": 0, 00:37:58.617 "w_mbytes_per_sec": 0 00:37:58.617 }, 00:37:58.617 "claimed": false, 00:37:58.617 "zoned": false, 00:37:58.617 "supported_io_types": { 00:37:58.617 "read": true, 00:37:58.617 "write": true, 00:37:58.617 "unmap": true, 00:37:58.617 "flush": false, 00:37:58.617 "reset": true, 00:37:58.617 "nvme_admin": false, 00:37:58.617 "nvme_io": false, 00:37:58.617 "nvme_io_md": false, 00:37:58.617 "write_zeroes": true, 00:37:58.617 "zcopy": false, 00:37:58.617 "get_zone_info": false, 00:37:58.617 "zone_management": false, 00:37:58.617 "zone_append": false, 00:37:58.617 "compare": false, 00:37:58.617 "compare_and_write": false, 00:37:58.617 "abort": false, 00:37:58.617 "seek_hole": true, 00:37:58.617 "seek_data": true, 00:37:58.617 "copy": false, 00:37:58.617 "nvme_iov_md": false 00:37:58.617 }, 00:37:58.617 "driver_specific": { 00:37:58.617 "lvol": { 00:37:58.617 "lvol_store_uuid": "1dad6072-671a-476a-849b-b7a797e00981", 00:37:58.617 "base_bdev": "aio_bdev", 00:37:58.617 "thin_provision": false, 00:37:58.617 "num_allocated_clusters": 38, 00:37:58.617 "snapshot": false, 00:37:58.617 "clone": false, 00:37:58.617 "esnap_clone": false 00:37:58.617 } 00:37:58.617 } 00:37:58.617 } 00:37:58.617 ] 00:37:58.617 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:37:58.617 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dad6072-671a-476a-849b-b7a797e00981 00:37:58.617 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:37:58.876 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:37:58.876 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dad6072-671a-476a-849b-b7a797e00981 00:37:58.876 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:37:58.876 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:37:58.876 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:59.136 [2024-10-06 11:32:56.549822] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:59.136 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dad6072-671a-476a-849b-b7a797e00981 00:37:59.136 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:37:59.137 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dad6072-671a-476a-849b-b7a797e00981 00:37:59.137 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:59.137 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:59.137 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:59.137 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:59.137 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:59.137 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:59.137 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:59.137 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:59.137 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dad6072-671a-476a-849b-b7a797e00981 00:37:59.396 request: 00:37:59.396 { 00:37:59.396 "uuid": "1dad6072-671a-476a-849b-b7a797e00981", 00:37:59.396 "method": "bdev_lvol_get_lvstores", 00:37:59.396 "req_id": 1 00:37:59.396 } 00:37:59.396 Got JSON-RPC error response 00:37:59.396 response: 00:37:59.396 { 00:37:59.396 "code": -19, 00:37:59.396 "message": "No such device" 00:37:59.396 } 00:37:59.396 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:37:59.396 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:59.396 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:59.396 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:59.396 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:59.656 
aio_bdev 00:37:59.656 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c7249d18-2560-479a-ab5b-57882d1f28e4 00:37:59.656 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=c7249d18-2560-479a-ab5b-57882d1f28e4 00:37:59.656 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:59.656 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:37:59.656 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:59.656 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:59.656 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:59.656 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c7249d18-2560-479a-ab5b-57882d1f28e4 -t 2000 00:37:59.916 [ 00:37:59.916 { 00:37:59.916 "name": "c7249d18-2560-479a-ab5b-57882d1f28e4", 00:37:59.916 "aliases": [ 00:37:59.916 "lvs/lvol" 00:37:59.916 ], 00:37:59.916 "product_name": "Logical Volume", 00:37:59.916 "block_size": 4096, 00:37:59.916 "num_blocks": 38912, 00:37:59.916 "uuid": "c7249d18-2560-479a-ab5b-57882d1f28e4", 00:37:59.916 "assigned_rate_limits": { 00:37:59.916 "rw_ios_per_sec": 0, 00:37:59.916 "rw_mbytes_per_sec": 0, 00:37:59.916 "r_mbytes_per_sec": 0, 00:37:59.916 "w_mbytes_per_sec": 0 00:37:59.916 }, 00:37:59.916 "claimed": false, 00:37:59.916 "zoned": false, 00:37:59.916 "supported_io_types": { 00:37:59.916 "read": true, 00:37:59.916 "write": true, 00:37:59.916 "unmap": true, 00:37:59.916 "flush": false, 00:37:59.916 "reset": true, 00:37:59.916 "nvme_admin": false, 00:37:59.916 "nvme_io": false, 00:37:59.916 "nvme_io_md": false, 00:37:59.916 "write_zeroes": true, 00:37:59.916 "zcopy": false, 00:37:59.916 "get_zone_info": false, 00:37:59.916 "zone_management": false, 00:37:59.916 "zone_append": false, 00:37:59.916 "compare": false, 00:37:59.916 "compare_and_write": false, 00:37:59.916 "abort": false, 00:37:59.916 "seek_hole": true, 00:37:59.916 "seek_data": true, 00:37:59.916 "copy": false, 00:37:59.916 "nvme_iov_md": false 00:37:59.916 }, 00:37:59.916 "driver_specific": { 00:37:59.916 "lvol": { 00:37:59.916 "lvol_store_uuid": "1dad6072-671a-476a-849b-b7a797e00981", 00:37:59.916 "base_bdev": "aio_bdev", 00:37:59.916 "thin_provision": false, 00:37:59.916 "num_allocated_clusters": 38, 00:37:59.916 "snapshot": false, 00:37:59.916 "clone": false, 00:37:59.916 "esnap_clone": false 00:37:59.916 } 00:37:59.916 } 00:37:59.916 } 00:37:59.916 ] 00:37:59.916 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:37:59.916 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dad6072-671a-476a-849b-b7a797e00981 00:37:59.916 11:32:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:00.176 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:00.176 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dad6072-671a-476a-849b-b7a797e00981 00:38:00.176 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:00.436 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:00.436 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c7249d18-2560-479a-ab5b-57882d1f28e4 00:38:00.436 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1dad6072-671a-476a-849b-b7a797e00981 00:38:00.696 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:00.955 00:38:00.955 real 0m16.760s 00:38:00.955 user 0m33.775s 00:38:00.955 sys 0m4.206s 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:00.955 ************************************ 00:38:00.955 END TEST lvs_grow_dirty 00:38:00.955 ************************************ 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:00.955 nvmf_trace.0 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:00.955 rmmod nvme_tcp 00:38:00.955 rmmod nvme_fabrics 00:38:00.955 rmmod nvme_keyring 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:00.955 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 2309560 ']' 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 2309560 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2309560 ']' 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2309560 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2309560 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2309560' 00:38:01.213 killing process with pid 2309560 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2309560 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2309560 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 
00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:01.213 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.745 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:03.745 00:38:03.745 real 0m41.088s 00:38:03.745 user 0m51.280s 00:38:03.745 sys 0m10.238s 00:38:03.745 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:03.745 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:03.745 ************************************ 00:38:03.745 END TEST nvmf_lvs_grow 00:38:03.745 ************************************ 00:38:03.745 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:03.745 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:03.745 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:03.745 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:03.745 ************************************ 00:38:03.745 START TEST nvmf_bdev_io_wait 00:38:03.745 ************************************ 00:38:03.745 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:03.745 * Looking for test storage... 
00:38:03.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:03.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.745 --rc genhtml_branch_coverage=1 00:38:03.745 --rc genhtml_function_coverage=1 00:38:03.745 --rc genhtml_legend=1 00:38:03.745 --rc geninfo_all_blocks=1 00:38:03.745 --rc geninfo_unexecuted_blocks=1 00:38:03.745 00:38:03.745 ' 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:03.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.745 --rc genhtml_branch_coverage=1 00:38:03.745 --rc genhtml_function_coverage=1 00:38:03.745 --rc genhtml_legend=1 00:38:03.745 --rc geninfo_all_blocks=1 00:38:03.745 --rc geninfo_unexecuted_blocks=1 00:38:03.745 00:38:03.745 ' 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:03.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.745 --rc genhtml_branch_coverage=1 00:38:03.745 --rc genhtml_function_coverage=1 00:38:03.745 --rc genhtml_legend=1 00:38:03.745 --rc geninfo_all_blocks=1 00:38:03.745 --rc geninfo_unexecuted_blocks=1 00:38:03.745 00:38:03.745 ' 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:03.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.745 --rc genhtml_branch_coverage=1 00:38:03.745 --rc genhtml_function_coverage=1 00:38:03.745 --rc genhtml_legend=1 00:38:03.745 --rc geninfo_all_blocks=1 00:38:03.745 --rc 
geninfo_unexecuted_blocks=1 00:38:03.745 00:38:03.745 ' 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:03.745 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:03.746 11:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:09.020 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:09.021 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:09.021 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:09.021 Found net devices under 0000:af:00.0: cvl_0_0 00:38:09.021 
11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:09.021 Found net devices under 0000:af:00.1: cvl_0_1 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:09.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:09.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:38:09.021 00:38:09.021 --- 10.0.0.2 ping statistics --- 00:38:09.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:09.021 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:09.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:09.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:38:09.021 00:38:09.021 --- 10.0.0.1 ping statistics --- 00:38:09.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:09.021 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:09.021 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=2313651 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 2313651 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2313651 ']' 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:09.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:09.022 [2024-10-06 11:33:06.387798] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:09.022 [2024-10-06 11:33:06.388678] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:38:09.022 [2024-10-06 11:33:06.388709] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:09.022 [2024-10-06 11:33:06.446877] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:09.022 [2024-10-06 11:33:06.487850] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:09.022 [2024-10-06 11:33:06.487894] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:09.022 [2024-10-06 11:33:06.487900] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:09.022 [2024-10-06 11:33:06.487906] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:09.022 [2024-10-06 11:33:06.487911] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:09.022 [2024-10-06 11:33:06.489273] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:38:09.022 [2024-10-06 11:33:06.489371] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:38:09.022 [2024-10-06 11:33:06.489480] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:38:09.022 [2024-10-06 11:33:06.489481] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:38:09.022 [2024-10-06 11:33:06.489776] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.022 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:09.282 [2024-10-06 11:33:06.632715] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:09.282 [2024-10-06 11:33:06.632795] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:09.282 [2024-10-06 11:33:06.633378] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:09.282 [2024-10-06 11:33:06.633784] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:38:09.282 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:09.283 [2024-10-06 11:33:06.645947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:09.283 Malloc0 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:09.283 [2024-10-06 11:33:06.726172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2313677 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2313679 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:09.283 { 00:38:09.283 "params": { 00:38:09.283 "name": "Nvme$subsystem", 00:38:09.283 "trtype": "$TEST_TRANSPORT", 00:38:09.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:09.283 "adrfam": "ipv4", 00:38:09.283 "trsvcid": "$NVMF_PORT", 00:38:09.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:09.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:09.283 "hdgst": ${hdgst:-false}, 00:38:09.283 "ddgst": ${ddgst:-false} 00:38:09.283 }, 00:38:09.283 "method": "bdev_nvme_attach_controller" 00:38:09.283 } 00:38:09.283 EOF 00:38:09.283 )") 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2313681 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:09.283 { 00:38:09.283 "params": { 00:38:09.283 "name": "Nvme$subsystem", 00:38:09.283 "trtype": "$TEST_TRANSPORT", 00:38:09.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:09.283 "adrfam": "ipv4", 00:38:09.283 "trsvcid": "$NVMF_PORT", 00:38:09.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:09.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:09.283 "hdgst": ${hdgst:-false}, 00:38:09.283 "ddgst": ${ddgst:-false} 00:38:09.283 }, 00:38:09.283 "method": "bdev_nvme_attach_controller" 00:38:09.283 } 00:38:09.283 EOF 00:38:09.283 )") 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=2313684 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:09.283 { 00:38:09.283 "params": { 00:38:09.283 "name": "Nvme$subsystem", 00:38:09.283 "trtype": "$TEST_TRANSPORT", 00:38:09.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:09.283 "adrfam": "ipv4", 00:38:09.283 "trsvcid": "$NVMF_PORT", 00:38:09.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:09.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:09.283 "hdgst": ${hdgst:-false}, 00:38:09.283 "ddgst": ${ddgst:-false} 00:38:09.283 }, 00:38:09.283 "method": "bdev_nvme_attach_controller" 00:38:09.283 } 00:38:09.283 EOF 00:38:09.283 )") 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:09.283 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:09.284 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:09.284 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:09.284 { 00:38:09.284 "params": { 00:38:09.284 "name": "Nvme$subsystem", 00:38:09.284 "trtype": "$TEST_TRANSPORT", 00:38:09.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:09.284 "adrfam": "ipv4", 00:38:09.284 "trsvcid": "$NVMF_PORT", 00:38:09.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:09.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:09.284 "hdgst": ${hdgst:-false}, 00:38:09.284 "ddgst": ${ddgst:-false} 00:38:09.284 }, 00:38:09.284 "method": "bdev_nvme_attach_controller" 00:38:09.284 } 00:38:09.284 EOF 00:38:09.284 )") 00:38:09.284 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:09.284 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2313677 00:38:09.284 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:09.284 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:38:09.284 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
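Four bdevperf instances (write, read, flush, unmap on core masks 0x10 through 0x80) are each fed a generated JSON config over an anonymous file descriptor, which is what the --json /dev/fd/63 arguments above refer to. A sketch of launching one of them by hand with process substitution follows; the outer "subsystems"/"config" wrapper is an assumption about what gen_nvmf_target_json emits, since only the inner params objects are printed in the trace below:

./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
    --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)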
00:38:09.284 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:38:09.284 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:09.284 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:09.284 "params": { 00:38:09.284 "name": "Nvme1", 00:38:09.284 "trtype": "tcp", 00:38:09.284 "traddr": "10.0.0.2", 00:38:09.284 "adrfam": "ipv4", 00:38:09.284 "trsvcid": "4420", 00:38:09.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:09.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:09.284 "hdgst": false, 00:38:09.284 "ddgst": false 00:38:09.284 }, 00:38:09.284 "method": "bdev_nvme_attach_controller" 00:38:09.284 }' 00:38:09.284 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:38:09.284 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:09.284 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:09.284 "params": { 00:38:09.284 "name": "Nvme1", 00:38:09.284 "trtype": "tcp", 00:38:09.284 "traddr": "10.0.0.2", 00:38:09.284 "adrfam": "ipv4", 00:38:09.284 "trsvcid": "4420", 00:38:09.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:09.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:09.284 "hdgst": false, 00:38:09.284 "ddgst": false 00:38:09.284 }, 00:38:09.284 "method": "bdev_nvme_attach_controller" 00:38:09.284 }' 00:38:09.284 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:09.284 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:09.284 "params": { 00:38:09.284 "name": "Nvme1", 00:38:09.284 "trtype": "tcp", 00:38:09.284 "traddr": "10.0.0.2", 00:38:09.284 "adrfam": "ipv4", 00:38:09.284 "trsvcid": "4420", 00:38:09.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:09.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:09.284 "hdgst": false, 00:38:09.284 "ddgst": false 00:38:09.284 }, 00:38:09.284 "method": "bdev_nvme_attach_controller" 00:38:09.284 }' 00:38:09.284 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:09.284 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:09.284 "params": { 00:38:09.284 "name": "Nvme1", 00:38:09.284 "trtype": "tcp", 00:38:09.284 "traddr": "10.0.0.2", 00:38:09.284 "adrfam": "ipv4", 00:38:09.284 "trsvcid": "4420", 00:38:09.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:09.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:09.284 "hdgst": false, 00:38:09.284 "ddgst": false 00:38:09.284 }, 00:38:09.284 "method": "bdev_nvme_attach_controller" 00:38:09.284 }' 00:38:09.284 [2024-10-06 11:33:06.775708] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:38:09.284 [2024-10-06 11:33:06.775759] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:09.284 [2024-10-06 11:33:06.779240] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:38:09.284 [2024-10-06 11:33:06.779278] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:09.284 [2024-10-06 11:33:06.779409] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:38:09.284 [2024-10-06 11:33:06.779447] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:09.284 [2024-10-06 11:33:06.779502] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:38:09.284 [2024-10-06 11:33:06.779536] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:09.544 [2024-10-06 11:33:06.942378] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:09.544 [2024-10-06 11:33:06.972627] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:38:09.544 [2024-10-06 11:33:07.040973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:09.544 [2024-10-06 11:33:07.070644] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:38:09.804 [2024-10-06 11:33:07.140670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:09.804 [2024-10-06 11:33:07.170720] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:38:09.804 [2024-10-06 11:33:07.241570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:09.804 [2024-10-06 11:33:07.274126] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:38:10.063 Running I/O for 1 seconds... 00:38:10.063 Running I/O for 1 seconds... 00:38:10.322 Running I/O for 1 seconds... 00:38:10.322 Running I/O for 1 seconds... 
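In the result tables that follow, MiB/s is simply the average IOPS scaled by the 4 KiB I/O size (MiB/s = IOPS * 4096 / 2^20). A quick check against the first table, the unmap job:

echo 'scale=2; 15318.88 * 4096 / 1048576' | bc   # 59.84, matching the reported MiB/s

The flush job's roughly 253 k IOPS stand out, presumably because flush against a RAM-backed malloc bdev has essentially no work to do and completes far faster than the data-moving workloads.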
00:38:11.271 15256.00 IOPS, 59.59 MiB/s 00:38:11.271 Latency(us) 00:38:11.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:11.271 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:11.271 Nvme1n1 : 1.01 15318.88 59.84 0.00 0.00 8329.06 4244.24 10236.10 00:38:11.271 =================================================================================================================== 00:38:11.271 Total : 15318.88 59.84 0.00 0.00 8329.06 4244.24 10236.10 00:38:11.271 6375.00 IOPS, 24.90 MiB/s 00:38:11.271 Latency(us) 00:38:11.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:11.271 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:11.271 Nvme1n1 : 1.01 6413.17 25.05 0.00 0.00 19780.22 2949.12 29959.31 00:38:11.271 =================================================================================================================== 00:38:11.271 Total : 6413.17 25.05 0.00 0.00 19780.22 2949.12 29959.31 00:38:11.271 6816.00 IOPS, 26.62 MiB/s 00:38:11.271 Latency(us) 00:38:11.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:11.271 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:11.271 Nvme1n1 : 1.01 6907.75 26.98 0.00 0.00 18474.08 4649.94 39446.43 00:38:11.271 =================================================================================================================== 00:38:11.271 Total : 6907.75 26.98 0.00 0.00 18474.08 4649.94 39446.43 00:38:11.271 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2313679 00:38:11.271 253048.00 IOPS, 988.47 MiB/s 00:38:11.271 Latency(us) 00:38:11.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:11.271 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:11.271 Nvme1n1 : 1.00 252665.13 986.97 0.00 0.00 504.22 232.11 1505.77 00:38:11.271 =================================================================================================================== 00:38:11.271 Total : 252665.13 986.97 0.00 0.00 504.22 232.11 1505.77 00:38:11.530 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2313681 00:38:11.530 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2313684 00:38:11.530 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:11.530 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:11.530 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:11.530 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:11.530 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:11.530 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:11.530 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:11.530 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:11.530 11:33:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:11.530 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:11.530 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:11.530 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:11.530 rmmod nvme_tcp 00:38:11.530 rmmod nvme_fabrics 00:38:11.530 rmmod nvme_keyring 00:38:11.530 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:11.530 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:11.530 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:11.530 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 2313651 ']' 00:38:11.530 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 2313651 00:38:11.530 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2313651 ']' 00:38:11.530 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2313651 00:38:11.530 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:38:11.530 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:11.530 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2313651 00:38:11.789 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:11.789 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:11.789 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2313651' 00:38:11.789 killing process with pid 2313651 00:38:11.789 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2313651 00:38:11.789 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2313651 00:38:11.789 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:11.789 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:11.789 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:11.789 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:11.789 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:38:11.789 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:11.789 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:38:11.789 11:33:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:11.789 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:11.789 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:11.789 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:11.789 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:14.327 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:14.327 00:38:14.327 real 0m10.448s 00:38:14.327 user 0m16.313s 00:38:14.327 sys 0m6.275s 00:38:14.327 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:14.327 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:14.327 ************************************ 00:38:14.327 END TEST nvmf_bdev_io_wait 00:38:14.327 ************************************ 00:38:14.327 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:14.327 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:14.327 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:14.327 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:14.327 ************************************ 00:38:14.327 START TEST nvmf_queue_depth 00:38:14.327 ************************************ 00:38:14.327 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:14.327 * Looking for test storage... 
00:38:14.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:14.327 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:14.327 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:38:14.327 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:14.327 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:14.327 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:14.327 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:14.327 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:14.327 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:14.327 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:14.327 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:14.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.328 --rc genhtml_branch_coverage=1 00:38:14.328 --rc genhtml_function_coverage=1 00:38:14.328 --rc genhtml_legend=1 00:38:14.328 --rc geninfo_all_blocks=1 00:38:14.328 --rc geninfo_unexecuted_blocks=1 00:38:14.328 00:38:14.328 ' 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:14.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.328 --rc genhtml_branch_coverage=1 00:38:14.328 --rc genhtml_function_coverage=1 00:38:14.328 --rc genhtml_legend=1 00:38:14.328 --rc geninfo_all_blocks=1 00:38:14.328 --rc geninfo_unexecuted_blocks=1 00:38:14.328 00:38:14.328 ' 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:14.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.328 --rc genhtml_branch_coverage=1 00:38:14.328 --rc genhtml_function_coverage=1 00:38:14.328 --rc genhtml_legend=1 00:38:14.328 --rc geninfo_all_blocks=1 00:38:14.328 --rc geninfo_unexecuted_blocks=1 00:38:14.328 00:38:14.328 ' 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:14.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.328 --rc genhtml_branch_coverage=1 00:38:14.328 --rc genhtml_function_coverage=1 00:38:14.328 --rc genhtml_legend=1 00:38:14.328 --rc geninfo_all_blocks=1 00:38:14.328 --rc 
geninfo_unexecuted_blocks=1 00:38:14.328 00:38:14.328 ' 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:14.328 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:14.329 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:14.329 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:14.329 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:14.329 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:14.329 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:14.329 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:14.329 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:14.329 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:14.329 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:14.329 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:14.329 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:14.329 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:14.329 11:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:19.606 11:33:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:19.606 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:19.606 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:19.606 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:38:19.607 Found net devices under 0000:af:00.0: cvl_0_0 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:19.607 Found net devices under 0000:af:00.1: cvl_0_1 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:19.607 11:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:19.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:19.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:38:19.607 00:38:19.607 --- 10.0.0.2 ping statistics --- 00:38:19.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:19.607 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:19.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:19.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:38:19.607 00:38:19.607 --- 10.0.0.1 ping statistics --- 00:38:19.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:19.607 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=2317951 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 2317951 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2317951 ']' 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:19.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
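At this point nvmftestinit has built the two-endpoint TCP topology used by the rest of the run: the first e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 for the target, while cvl_0_1 stays in the root namespace as 10.0.0.1 for the initiator side, and both directions are verified with a ping. Condensed from the commands in the trace above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The nvmf_tgt for the queue-depth test is then started inside that namespace with -m 0x2 --interrupt-mode, which is why the listener created below is on 10.0.0.2.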
00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:19.607 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:19.607 [2024-10-06 11:33:17.168950] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:19.607 [2024-10-06 11:33:17.169885] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:38:19.607 [2024-10-06 11:33:17.169922] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:19.865 [2024-10-06 11:33:17.231239] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:19.865 [2024-10-06 11:33:17.270734] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:19.865 [2024-10-06 11:33:17.270775] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:19.865 [2024-10-06 11:33:17.270785] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:19.865 [2024-10-06 11:33:17.270791] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:19.865 [2024-10-06 11:33:17.270796] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:19.865 [2024-10-06 11:33:17.271307] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:38:19.866 [2024-10-06 11:33:17.332417] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:19.866 [2024-10-06 11:33:17.332627] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
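The RPCs recorded in the next few entries amount to the following bring-up; every value is taken from the trace, while the comment is an interpretation rather than tool output:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MB malloc bdev, 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf is then launched idle (-z) on its own RPC socket, handed the remote controller over that socket, and finally driven by bdevperf.py, which is where the 1024-deep verify workload in the results further below comes from:

./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests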
00:38:19.866 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:19.866 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:38:19.866 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:19.866 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:19.866 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:19.866 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:19.866 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:19.866 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.866 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:19.866 [2024-10-06 11:33:17.399816] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:19.866 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.866 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:19.866 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.866 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:19.866 Malloc0 00:38:19.866 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:20.124 [2024-10-06 11:33:17.467858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2318010 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2318010 /var/tmp/bdevperf.sock 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2318010 ']' 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:20.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:20.124 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:20.124 [2024-10-06 11:33:17.515615] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:38:20.124 [2024-10-06 11:33:17.515657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2318010 ]
00:38:20.124 [2024-10-06 11:33:17.569841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:20.124 [2024-10-06 11:33:17.608553] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:38:20.383 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:38:20.383 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0
00:38:20.383 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:38:20.383 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:20.383 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:38:20.383 NVMe0n1
00:38:20.383 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:20.383 11:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:38:20.641 Running I/O for 10 seconds...
00:38:30.567 12280.00 IOPS, 47.97 MiB/s 12294.50 IOPS, 48.03 MiB/s 12380.67 IOPS, 48.36 MiB/s 12542.00 IOPS, 48.99 MiB/s 12522.60 IOPS, 48.92 MiB/s 12633.33 IOPS, 49.35 MiB/s 12672.14 IOPS, 49.50 MiB/s 12689.38 IOPS, 49.57 MiB/s 12749.22 IOPS, 49.80 MiB/s 12788.30 IOPS, 49.95 MiB/s
00:38:30.567
00:38:30.567 Latency(us)
00:38:30.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:30.567 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:38:30.567 Verification LBA range: start 0x0 length 0x4000
00:38:30.567 NVMe0n1 : 10.07 12801.10 50.00 0.00 0.00 79714.43 18724.57 49682.53
00:38:30.567 ===================================================================================================================
00:38:30.567 Total : 12801.10 50.00 0.00 0.00 79714.43 18724.57 49682.53
00:38:30.567 {
00:38:30.567 "results": [
00:38:30.567 {
00:38:30.567 "job": "NVMe0n1",
00:38:30.567 "core_mask": "0x1",
00:38:30.567 "workload": "verify",
00:38:30.567 "status": "finished",
00:38:30.567 "verify_range": {
00:38:30.567 "start": 0,
00:38:30.567 "length": 16384
00:38:30.567 },
00:38:30.567 "queue_depth": 1024,
00:38:30.567 "io_size": 4096,
00:38:30.567 "runtime": 10.065382,
00:38:30.567 "iops": 12801.103822984562,
00:38:30.567 "mibps": 50.004311808533444,
00:38:30.567 "io_failed": 0,
00:38:30.567 "io_timeout": 0,
00:38:30.567 "avg_latency_us": 79714.43116969126,
00:38:30.567 "min_latency_us": 18724.571428571428,
00:38:30.567 "max_latency_us": 49682.52952380952
00:38:30.567 }
00:38:30.567 ],
00:38:30.567 "core_count": 1
00:38:30.567 }
00:38:30.567 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2318010
00:38:30.567 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2318010
']' 00:38:30.567 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2318010 00:38:30.567 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:38:30.567 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:30.567 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2318010 00:38:30.567 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:30.567 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:30.567 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2318010' 00:38:30.567 killing process with pid 2318010 00:38:30.567 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2318010 00:38:30.567 Received shutdown signal, test time was about 10.000000 seconds 00:38:30.567 00:38:30.567 Latency(us) 00:38:30.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:30.567 =================================================================================================================== 00:38:30.567 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:30.567 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2318010 00:38:30.827 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:30.827 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:30.827 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:30.827 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:30.827 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:30.827 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:30.827 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:30.827 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:30.827 rmmod nvme_tcp 00:38:30.827 rmmod nvme_fabrics 00:38:30.827 rmmod nvme_keyring 00:38:30.827 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:30.827 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:30.827 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:30.827 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 2317951 ']' 00:38:30.827 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 2317951 00:38:30.827 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2317951 ']' 00:38:30.827 11:33:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2317951 00:38:30.827 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:38:30.827 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:30.827 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2317951 00:38:31.086 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:31.086 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:31.086 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2317951' 00:38:31.086 killing process with pid 2317951 00:38:31.086 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2317951 00:38:31.086 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2317951 00:38:31.086 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:31.086 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:31.086 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:31.086 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:31.086 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:38:31.086 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:31.086 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:38:31.086 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:31.086 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:31.086 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:31.086 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:31.086 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:33.625 00:38:33.625 real 0m19.270s 00:38:33.625 user 0m22.731s 00:38:33.625 sys 0m5.858s 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:33.625 ************************************ 00:38:33.625 END TEST nvmf_queue_depth 00:38:33.625 ************************************ 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:33.625 ************************************ 00:38:33.625 START TEST nvmf_target_multipath 00:38:33.625 ************************************ 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:33.625 * Looking for test storage... 00:38:33.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:33.625 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:33.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:33.626 --rc genhtml_branch_coverage=1 00:38:33.626 --rc genhtml_function_coverage=1 00:38:33.626 --rc genhtml_legend=1 00:38:33.626 --rc geninfo_all_blocks=1 00:38:33.626 --rc geninfo_unexecuted_blocks=1 00:38:33.626 00:38:33.626 ' 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:33.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:33.626 --rc genhtml_branch_coverage=1 00:38:33.626 --rc genhtml_function_coverage=1 00:38:33.626 --rc genhtml_legend=1 00:38:33.626 --rc geninfo_all_blocks=1 00:38:33.626 --rc geninfo_unexecuted_blocks=1 00:38:33.626 00:38:33.626 ' 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:33.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:33.626 --rc genhtml_branch_coverage=1 00:38:33.626 --rc genhtml_function_coverage=1 00:38:33.626 --rc genhtml_legend=1 00:38:33.626 --rc geninfo_all_blocks=1 00:38:33.626 --rc geninfo_unexecuted_blocks=1 00:38:33.626 00:38:33.626 ' 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:33.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:33.626 --rc genhtml_branch_coverage=1 00:38:33.626 --rc genhtml_function_coverage=1 00:38:33.626 --rc 
genhtml_legend=1 00:38:33.626 --rc geninfo_all_blocks=1 00:38:33.626 --rc geninfo_unexecuted_blocks=1 00:38:33.626 00:38:33.626 ' 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:33.626 11:33:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:38:33.626 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:38:38.906 11:33:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:38.906 11:33:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:38.906 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:38.906 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:38.906 11:33:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:38.906 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:38.907 Found net devices under 0000:af:00.0: cvl_0_0 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:38.907 Found net devices under 0000:af:00.1: cvl_0_1 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:38.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:38.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:38:38.907 00:38:38.907 --- 10.0.0.2 ping statistics --- 00:38:38.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:38.907 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:38.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:38.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:38:38.907 00:38:38.907 --- 10.0.0.1 ping statistics --- 00:38:38.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:38.907 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:38:38.907 only one NIC for nvmf test 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:38.907 rmmod nvme_tcp 00:38:38.907 rmmod nvme_fabrics 00:38:38.907 rmmod nvme_keyring 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p 
]] 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:38.907 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:41.446 11:33:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:41.446 00:38:41.446 real 0m7.793s 00:38:41.446 user 0m1.690s 00:38:41.446 sys 0m4.142s 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:41.446 ************************************ 00:38:41.446 END TEST nvmf_target_multipath 00:38:41.446 ************************************ 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:41.446 ************************************ 00:38:41.446 START TEST nvmf_zcopy 00:38:41.446 ************************************ 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:41.446 * Looking for test storage... 
00:38:41.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:41.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.446 --rc genhtml_branch_coverage=1 00:38:41.446 --rc genhtml_function_coverage=1 00:38:41.446 --rc genhtml_legend=1 00:38:41.446 --rc geninfo_all_blocks=1 00:38:41.446 --rc geninfo_unexecuted_blocks=1 00:38:41.446 00:38:41.446 ' 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:41.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.446 --rc genhtml_branch_coverage=1 00:38:41.446 --rc genhtml_function_coverage=1 00:38:41.446 --rc genhtml_legend=1 00:38:41.446 --rc geninfo_all_blocks=1 00:38:41.446 --rc geninfo_unexecuted_blocks=1 00:38:41.446 00:38:41.446 ' 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:41.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.446 --rc genhtml_branch_coverage=1 00:38:41.446 --rc genhtml_function_coverage=1 00:38:41.446 --rc genhtml_legend=1 00:38:41.446 --rc geninfo_all_blocks=1 00:38:41.446 --rc geninfo_unexecuted_blocks=1 00:38:41.446 00:38:41.446 ' 00:38:41.446 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:41.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.447 --rc genhtml_branch_coverage=1 00:38:41.447 --rc genhtml_function_coverage=1 00:38:41.447 --rc genhtml_legend=1 00:38:41.447 --rc geninfo_all_blocks=1 00:38:41.447 --rc geninfo_unexecuted_blocks=1 00:38:41.447 00:38:41.447 ' 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:41.447 11:33:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:38:41.447 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:46.756 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:46.756 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:38:46.756 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:46.756 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:46.756 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:46.756 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:46.756 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:46.756 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:38:46.757 11:33:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:46.757 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:46.757 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:46.757 Found net devices under 0000:af:00.0: cvl_0_0 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:46.757 Found net devices under 0000:af:00.1: cvl_0_1 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:46.757 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:46.757 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:46.757 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:46.757 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:46.757 11:33:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:46.757 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:46.757 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:46.757 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:46.757 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:46.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:46.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:38:46.757 00:38:46.757 --- 10.0.0.2 ping statistics --- 00:38:46.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:46.757 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:38:46.757 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:46.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:46.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:38:46.757 00:38:46.757 --- 10.0.0.1 ping statistics --- 00:38:46.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:46.758 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=2326405 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 2326405 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2326405 ']' 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:46.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:46.758 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:46.758 [2024-10-06 11:33:44.272538] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:46.758 [2024-10-06 11:33:44.273471] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:38:46.758 [2024-10-06 11:33:44.273503] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:47.032 [2024-10-06 11:33:44.334875] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.032 [2024-10-06 11:33:44.372263] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:47.032 [2024-10-06 11:33:44.372308] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:47.032 [2024-10-06 11:33:44.372315] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:47.032 [2024-10-06 11:33:44.372321] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:47.032 [2024-10-06 11:33:44.372326] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:47.032 [2024-10-06 11:33:44.372876] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:38:47.032 [2024-10-06 11:33:44.432991] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:47.032 [2024-10-06 11:33:44.433224] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
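With nvmf_tgt now running in interrupt mode inside the cvl_0_0_ns_spdk namespace (reactors wait on events rather than busy-polling), the trace below configures the target over JSON-RPC and then runs bdevperf against it from the initiator side. A condensed sketch of that sequence, reconstructed from the rpc_cmd and bdevperf invocations that follow (rpc_cmd is the test suite's wrapper around scripts/rpc.py; the addresses, NQNs, and sizes are the ones used in this run):

  # TCP transport with zero-copy enabled; -c 0 sets the in-capsule data size to 0
  rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
  # subsystem with any-host access, a serial number, and a 10-namespace limit
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 32 MB malloc bdev with a 4096-byte block size, attached as namespace 1
  rpc_cmd bdev_malloc_create 32 4096 -b malloc0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # initiator side: 10 s verify workload, queue depth 128, 8 KiB I/O, attaching to the
  # target through the bdev_nvme_attach_controller JSON that gen_nvmf_target_json prints below
  ./build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192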
00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:47.032 [2024-10-06 11:33:44.497306] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:47.032 [2024-10-06 11:33:44.521513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:38:47.032 11:33:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:47.032 malloc0 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:47.032 { 00:38:47.032 "params": { 00:38:47.032 "name": "Nvme$subsystem", 00:38:47.032 "trtype": "$TEST_TRANSPORT", 00:38:47.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:47.032 "adrfam": "ipv4", 00:38:47.032 "trsvcid": "$NVMF_PORT", 00:38:47.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:47.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:47.032 "hdgst": ${hdgst:-false}, 00:38:47.032 "ddgst": ${ddgst:-false} 00:38:47.032 }, 00:38:47.032 "method": "bdev_nvme_attach_controller" 00:38:47.032 } 00:38:47.032 EOF 00:38:47.032 )") 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:38:47.032 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:47.032 "params": { 00:38:47.032 "name": "Nvme1", 00:38:47.032 "trtype": "tcp", 00:38:47.033 "traddr": "10.0.0.2", 00:38:47.033 "adrfam": "ipv4", 00:38:47.033 "trsvcid": "4420", 00:38:47.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:47.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:47.033 "hdgst": false, 00:38:47.033 "ddgst": false 00:38:47.033 }, 00:38:47.033 "method": "bdev_nvme_attach_controller" 00:38:47.033 }' 00:38:47.291 [2024-10-06 11:33:44.624441] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
00:38:47.292 [2024-10-06 11:33:44.624486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2326534 ] 00:38:47.292 [2024-10-06 11:33:44.679405] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.292 [2024-10-06 11:33:44.718075] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:38:47.551 Running I/O for 10 seconds... 00:38:57.809 8076.00 IOPS, 63.09 MiB/s 8192.00 IOPS, 64.00 MiB/s 8228.00 IOPS, 64.28 MiB/s 8286.75 IOPS, 64.74 MiB/s 8298.20 IOPS, 64.83 MiB/s 8306.67 IOPS, 64.90 MiB/s 8333.86 IOPS, 65.11 MiB/s 8358.50 IOPS, 65.30 MiB/s 8370.11 IOPS, 65.39 MiB/s 8379.50 IOPS, 65.46 MiB/s 00:38:57.809 Latency(us) 00:38:57.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:57.809 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:38:57.809 Verification LBA range: start 0x0 length 0x1000 00:38:57.809 Nvme1n1 : 10.01 8381.78 65.48 0.00 0.00 15228.44 2200.14 22594.32 00:38:57.809 =================================================================================================================== 00:38:57.809 Total : 8381.78 65.48 0.00 0.00 15228.44 2200.14 22594.32 00:38:57.809 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2328089 00:38:57.809 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:38:57.809 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:57.809 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:38:57.809 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:38:57.809 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:38:57.809 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:38:57.809 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:57.809 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:57.809 { 00:38:57.809 "params": { 00:38:57.809 "name": "Nvme$subsystem", 00:38:57.809 "trtype": "$TEST_TRANSPORT", 00:38:57.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:57.809 "adrfam": "ipv4", 00:38:57.809 "trsvcid": "$NVMF_PORT", 00:38:57.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:57.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:57.809 "hdgst": ${hdgst:-false}, 00:38:57.809 "ddgst": ${ddgst:-false} 00:38:57.809 }, 00:38:57.809 "method": "bdev_nvme_attach_controller" 00:38:57.809 } 00:38:57.809 EOF 00:38:57.809 )") 00:38:57.809 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:38:57.809 [2024-10-06 11:33:55.245224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.809 [2024-10-06 11:33:55.245257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.809 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- 
# jq . 00:38:57.809 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:38:57.809 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:57.809 "params": { 00:38:57.809 "name": "Nvme1", 00:38:57.809 "trtype": "tcp", 00:38:57.809 "traddr": "10.0.0.2", 00:38:57.809 "adrfam": "ipv4", 00:38:57.809 "trsvcid": "4420", 00:38:57.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:57.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:57.809 "hdgst": false, 00:38:57.809 "ddgst": false 00:38:57.809 }, 00:38:57.809 "method": "bdev_nvme_attach_controller" 00:38:57.809 }' 00:38:57.809 [2024-10-06 11:33:55.257188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.810 [2024-10-06 11:33:55.257201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.810 [2024-10-06 11:33:55.269178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.810 [2024-10-06 11:33:55.269187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.810 [2024-10-06 11:33:55.281178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.810 [2024-10-06 11:33:55.281192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.810 [2024-10-06 11:33:55.283788] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:38:57.810 [2024-10-06 11:33:55.283828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2328089 ] 00:38:57.810 [2024-10-06 11:33:55.293178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.810 [2024-10-06 11:33:55.293188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.810 [2024-10-06 11:33:55.305177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.810 [2024-10-06 11:33:55.305187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.810 [2024-10-06 11:33:55.317179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.810 [2024-10-06 11:33:55.317189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.810 [2024-10-06 11:33:55.329179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.810 [2024-10-06 11:33:55.329187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.810 [2024-10-06 11:33:55.339451] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:57.810 [2024-10-06 11:33:55.341180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.810 [2024-10-06 11:33:55.341190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.810 [2024-10-06 11:33:55.353183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.810 [2024-10-06 11:33:55.353197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.810 [2024-10-06 11:33:55.365186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.810 [2024-10-06 11:33:55.365207] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.810 [2024-10-06 11:33:55.377180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.810 [2024-10-06 11:33:55.377192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.810 [2024-10-06 11:33:55.378460] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:38:58.070 [2024-10-06 11:33:55.389184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.389199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.401184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.401201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.413182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.413196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.425179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.425190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.437192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.437213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.449178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.449190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.461198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.461219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.473186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.473208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.485182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.485197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.497181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.497191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.509192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.509202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.521181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.521191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.533183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.533196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.545181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.545195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.557184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.557202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 Running I/O for 5 seconds... 00:38:58.070 [2024-10-06 11:33:55.574785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.574805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.589399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.589418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.600752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.600770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.614181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.614199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.629401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.629420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.070 [2024-10-06 11:33:55.640485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.070 [2024-10-06 11:33:55.640503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.330 [2024-10-06 11:33:55.653865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.330 [2024-10-06 11:33:55.653884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.330 [2024-10-06 11:33:55.665404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.330 [2024-10-06 11:33:55.665423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.330 [2024-10-06 11:33:55.677990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.331 [2024-10-06 11:33:55.678009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.331 [2024-10-06 11:33:55.692869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.331 [2024-10-06 11:33:55.692888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.331 [2024-10-06 11:33:55.705544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.331 [2024-10-06 11:33:55.705561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.331 [2024-10-06 11:33:55.717475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.331 [2024-10-06 11:33:55.717497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.331 [2024-10-06 11:33:55.730264] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.331 [2024-10-06 11:33:55.730292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.331 [2024-10-06 11:33:55.745156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.331 [2024-10-06 11:33:55.745174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.331 [2024-10-06 11:33:55.756421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.331 [2024-10-06 11:33:55.756439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.331 [2024-10-06 11:33:55.770526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.331 [2024-10-06 11:33:55.770545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.331 [2024-10-06 11:33:55.785884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.331 [2024-10-06 11:33:55.785901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.331 [2024-10-06 11:33:55.801007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.331 [2024-10-06 11:33:55.801025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.331 [2024-10-06 11:33:55.814549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.331 [2024-10-06 11:33:55.814567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.331 [2024-10-06 11:33:55.829007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.331 [2024-10-06 11:33:55.829025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.331 [2024-10-06 11:33:55.840031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.331 [2024-10-06 11:33:55.840048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.331 [2024-10-06 11:33:55.854757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.331 [2024-10-06 11:33:55.854775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.331 [2024-10-06 11:33:55.869256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.331 [2024-10-06 11:33:55.869275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.331 [2024-10-06 11:33:55.880171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.331 [2024-10-06 11:33:55.880188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.331 [2024-10-06 11:33:55.894788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.331 [2024-10-06 11:33:55.894805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.591 [2024-10-06 11:33:55.910029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.591 [2024-10-06 11:33:55.910048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.591 [2024-10-06 11:33:55.925202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.591 [2024-10-06 11:33:55.925220] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.591 [2024-10-06 11:33:55.937545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.591 [2024-10-06 11:33:55.937562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.592 [2024-10-06 11:33:55.953303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.592 [2024-10-06 11:33:55.953320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.592 [2024-10-06 11:33:55.964898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.592 [2024-10-06 11:33:55.964916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.592 [2024-10-06 11:33:55.979115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.592 [2024-10-06 11:33:55.979138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.592 [2024-10-06 11:33:55.993867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.592 [2024-10-06 11:33:55.993884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.592 [2024-10-06 11:33:56.004974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.592 [2024-10-06 11:33:56.004992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.592 [2024-10-06 11:33:56.018934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.592 [2024-10-06 11:33:56.018952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.592 [2024-10-06 11:33:56.033504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.592 [2024-10-06 11:33:56.033521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.592 [2024-10-06 11:33:56.049263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.592 [2024-10-06 11:33:56.049281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.592 [2024-10-06 11:33:56.061873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.592 [2024-10-06 11:33:56.061891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.592 [2024-10-06 11:33:56.077159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.592 [2024-10-06 11:33:56.077178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.592 [2024-10-06 11:33:56.089920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.592 [2024-10-06 11:33:56.089938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.592 [2024-10-06 11:33:56.105307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.592 [2024-10-06 11:33:56.105325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.592 [2024-10-06 11:33:56.116012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.592 [2024-10-06 11:33:56.116029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.592 [2024-10-06 11:33:56.131030] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.592 [2024-10-06 11:33:56.131048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.592 [2024-10-06 11:33:56.145269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.592 [2024-10-06 11:33:56.145286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.592 [2024-10-06 11:33:56.155972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.592 [2024-10-06 11:33:56.155990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.170543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.170568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.184940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.184959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.196616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.196634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.210332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.210350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.225475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.225492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.241435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.241453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.252403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.252421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.266865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.266884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.281248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.281266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.293168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.293186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.306038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.306057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.317516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.317535] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.330230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.330252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.345277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.345295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.356848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.356865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.370041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.370064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.381117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.381136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.394371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.394389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.409228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.409247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.852 [2024-10-06 11:33:56.420606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.852 [2024-10-06 11:33:56.420624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.112 [2024-10-06 11:33:56.434659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.112 [2024-10-06 11:33:56.434677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.112 [2024-10-06 11:33:56.449847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.112 [2024-10-06 11:33:56.449865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.112 [2024-10-06 11:33:56.464856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.112 [2024-10-06 11:33:56.464875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.112 [2024-10-06 11:33:56.476998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.112 [2024-10-06 11:33:56.477015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.112 [2024-10-06 11:33:56.489688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.112 [2024-10-06 11:33:56.489705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.112 [2024-10-06 11:33:56.501708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.112 [2024-10-06 11:33:56.501726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.112 [2024-10-06 11:33:56.514604] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.112 [2024-10-06 11:33:56.514621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.112 [2024-10-06 11:33:56.529638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.112 [2024-10-06 11:33:56.529655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.112 [2024-10-06 11:33:56.545196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.112 [2024-10-06 11:33:56.545215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.112 [2024-10-06 11:33:56.559261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.112 [2024-10-06 11:33:56.559279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.112 16436.00 IOPS, 128.41 MiB/s [2024-10-06 11:33:56.573997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.112 [2024-10-06 11:33:56.574015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.112 [2024-10-06 11:33:56.588888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.112 [2024-10-06 11:33:56.588906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.112 [2024-10-06 11:33:56.600244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.112 [2024-10-06 11:33:56.600261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.112 [2024-10-06 11:33:56.614436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.112 [2024-10-06 11:33:56.614454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.112 [2024-10-06 11:33:56.629201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.112 [2024-10-06 11:33:56.629219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.112 [2024-10-06 11:33:56.640564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.112 [2024-10-06 11:33:56.640582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.112 [2024-10-06 11:33:56.654497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.112 [2024-10-06 11:33:56.654516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.112 [2024-10-06 11:33:56.669105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.112 [2024-10-06 11:33:56.669124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.113 [2024-10-06 11:33:56.683055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.113 [2024-10-06 11:33:56.683080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.697021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 11:33:56.697038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.709884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 
11:33:56.709901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.721189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 11:33:56.721207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.734570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 11:33:56.734588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.749432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 11:33:56.749450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.760476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 11:33:56.760493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.775403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 11:33:56.775421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.790044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 11:33:56.790066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.805731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 11:33:56.805750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.817215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 11:33:56.817233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.830492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 11:33:56.830510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.845306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 11:33:56.845325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.856972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 11:33:56.856990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.870480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 11:33:56.870498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.884954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 11:33:56.884973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.896081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 11:33:56.896114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.910705] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 11:33:56.910724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.925373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 11:33:56.925392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.373 [2024-10-06 11:33:56.940959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.373 [2024-10-06 11:33:56.940978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.632 [2024-10-06 11:33:56.953837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.632 [2024-10-06 11:33:56.953855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.633 [2024-10-06 11:33:56.969796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.633 [2024-10-06 11:33:56.969814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.633 [2024-10-06 11:33:56.985240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.633 [2024-10-06 11:33:56.985258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.633 [2024-10-06 11:33:56.996558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.633 [2024-10-06 11:33:56.996582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.633 [2024-10-06 11:33:57.010581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.633 [2024-10-06 11:33:57.010599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.633 [2024-10-06 11:33:57.025157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.633 [2024-10-06 11:33:57.025174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.633 [2024-10-06 11:33:57.036109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.633 [2024-10-06 11:33:57.036127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.633 [2024-10-06 11:33:57.050496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.633 [2024-10-06 11:33:57.050514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.633 [2024-10-06 11:33:57.065128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.633 [2024-10-06 11:33:57.065146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.633 [2024-10-06 11:33:57.076772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.633 [2024-10-06 11:33:57.076791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.633 [2024-10-06 11:33:57.090349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.633 [2024-10-06 11:33:57.090367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.633 [2024-10-06 11:33:57.104910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.633 [2024-10-06 11:33:57.104927] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.633 [2024-10-06 11:33:57.118440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.633 [2024-10-06 11:33:57.118459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.633 [2024-10-06 11:33:57.132924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.633 [2024-10-06 11:33:57.132942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.633 [2024-10-06 11:33:57.144174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.633 [2024-10-06 11:33:57.144192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.633 [2024-10-06 11:33:57.158611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.633 [2024-10-06 11:33:57.158630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.633 [2024-10-06 11:33:57.173149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.633 [2024-10-06 11:33:57.173175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.633 [2024-10-06 11:33:57.184874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.633 [2024-10-06 11:33:57.184892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.633 [2024-10-06 11:33:57.199180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.633 [2024-10-06 11:33:57.199199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.213750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.213768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.229007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.229025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.241734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.241751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.257302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.257325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.268827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.268846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.282371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.282389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.297257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.297275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.308841] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.308859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.321589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.321606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.332576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.332595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.345974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.345993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.360790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.360808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.373964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.373981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.389718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.389735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.405185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.405203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.418041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.418065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.433560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.433578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.446153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.446170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.892 [2024-10-06 11:33:57.461285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.892 [2024-10-06 11:33:57.461314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.151 [2024-10-06 11:33:57.471932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.471951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-10-06 11:33:57.486573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.486591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-10-06 11:33:57.501170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.501188] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-10-06 11:33:57.513165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.513187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-10-06 11:33:57.526310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.526328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-10-06 11:33:57.541201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.541219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-10-06 11:33:57.552903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.552921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-10-06 11:33:57.566142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.566160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 16482.00 IOPS, 128.77 MiB/s [2024-10-06 11:33:57.581739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.581757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-10-06 11:33:57.592846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.592864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-10-06 11:33:57.606378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.606397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-10-06 11:33:57.620569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.620587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-10-06 11:33:57.635248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.635266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-10-06 11:33:57.649918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.649935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-10-06 11:33:57.660439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.660456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-10-06 11:33:57.675169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.675187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-10-06 11:33:57.689336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.689353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-10-06 11:33:57.700904] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.700921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-10-06 11:33:57.715485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-10-06 11:33:57.715503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.411 [2024-10-06 11:33:57.730460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.730480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.412 [2024-10-06 11:33:57.744595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.744613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.412 [2024-10-06 11:33:57.757671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.757689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.412 [2024-10-06 11:33:57.772930] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.772949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.412 [2024-10-06 11:33:57.786671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.786688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.412 [2024-10-06 11:33:57.801433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.801451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.412 [2024-10-06 11:33:57.812500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.812518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.412 [2024-10-06 11:33:57.826160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.826179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.412 [2024-10-06 11:33:57.840669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.840687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.412 [2024-10-06 11:33:57.851863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.851880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.412 [2024-10-06 11:33:57.866725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.866743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.412 [2024-10-06 11:33:57.880980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.880997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.412 [2024-10-06 11:33:57.892503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.892521] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.412 [2024-10-06 11:33:57.907194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.907212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.412 [2024-10-06 11:33:57.921859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.921877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.412 [2024-10-06 11:33:57.937288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.937306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.412 [2024-10-06 11:33:57.948366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.948383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.412 [2024-10-06 11:33:57.962064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.962082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.412 [2024-10-06 11:33:57.977239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.412 [2024-10-06 11:33:57.977256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.671 [2024-10-06 11:33:57.988684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.671 [2024-10-06 11:33:57.988703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.671 [2024-10-06 11:33:58.002895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.671 [2024-10-06 11:33:58.002913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.671 [2024-10-06 11:33:58.017048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.671 [2024-10-06 11:33:58.017071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.671 [2024-10-06 11:33:58.029641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.671 [2024-10-06 11:33:58.029659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.671 [2024-10-06 11:33:58.042055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.671 [2024-10-06 11:33:58.042079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.671 [2024-10-06 11:33:58.056989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.671 [2024-10-06 11:33:58.057008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.671 [2024-10-06 11:33:58.071286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.671 [2024-10-06 11:33:58.071305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.671 [2024-10-06 11:33:58.085349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.671 [2024-10-06 11:33:58.085367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.671 [2024-10-06 11:33:58.097029] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.671 [2024-10-06 11:33:58.097047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.671 [2024-10-06 11:33:58.110613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.671 [2024-10-06 11:33:58.110630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.671 [2024-10-06 11:33:58.124866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.671 [2024-10-06 11:33:58.124883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.672 [2024-10-06 11:33:58.136447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.672 [2024-10-06 11:33:58.136465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.672 [2024-10-06 11:33:58.150288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.672 [2024-10-06 11:33:58.150305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.672 [2024-10-06 11:33:58.165300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.672 [2024-10-06 11:33:58.165318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.672 [2024-10-06 11:33:58.176353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.672 [2024-10-06 11:33:58.176371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.672 [2024-10-06 11:33:58.189914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.672 [2024-10-06 11:33:58.189932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.672 [2024-10-06 11:33:58.201359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.672 [2024-10-06 11:33:58.201377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.672 [2024-10-06 11:33:58.214240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.672 [2024-10-06 11:33:58.214258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.672 [2024-10-06 11:33:58.229422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.672 [2024-10-06 11:33:58.229441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.672 [2024-10-06 11:33:58.241246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.672 [2024-10-06 11:33:58.241264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.930 [2024-10-06 11:33:58.253864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.930 [2024-10-06 11:33:58.253883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.930 [2024-10-06 11:33:58.269548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.930 [2024-10-06 11:33:58.269566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.930 [2024-10-06 11:33:58.285056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.930 [2024-10-06 11:33:58.285079] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.930 [2024-10-06 11:33:58.299094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.930 [2024-10-06 11:33:58.299114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.930 [2024-10-06 11:33:58.313223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.930 [2024-10-06 11:33:58.313241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.930 [2024-10-06 11:33:58.324293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.930 [2024-10-06 11:33:58.324312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.930 [2024-10-06 11:33:58.338277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.930 [2024-10-06 11:33:58.338296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.930 [2024-10-06 11:33:58.353365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.930 [2024-10-06 11:33:58.353386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.930 [2024-10-06 11:33:58.365068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.930 [2024-10-06 11:33:58.365087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.930 [2024-10-06 11:33:58.378577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.930 [2024-10-06 11:33:58.378595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.930 [2024-10-06 11:33:58.393760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.930 [2024-10-06 11:33:58.393778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.930 [2024-10-06 11:33:58.409272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.930 [2024-10-06 11:33:58.409290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.930 [2024-10-06 11:33:58.420468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.930 [2024-10-06 11:33:58.420485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.930 [2024-10-06 11:33:58.434712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.930 [2024-10-06 11:33:58.434730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.930 [2024-10-06 11:33:58.449538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.930 [2024-10-06 11:33:58.449556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.930 [2024-10-06 11:33:58.465330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.930 [2024-10-06 11:33:58.465348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.930 [2024-10-06 11:33:58.476638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.930 [2024-10-06 11:33:58.476655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.930 [2024-10-06 11:33:58.491018] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.930 [2024-10-06 11:33:58.491036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 [2024-10-06 11:33:58.505738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 11:33:58.505757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 [2024-10-06 11:33:58.517375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 11:33:58.517393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 [2024-10-06 11:33:58.530617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 11:33:58.530639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 [2024-10-06 11:33:58.545396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 11:33:58.545413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 [2024-10-06 11:33:58.560947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 11:33:58.560966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 [2024-10-06 11:33:58.572227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 11:33:58.572245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 16517.00 IOPS, 129.04 MiB/s [2024-10-06 11:33:58.586539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 11:33:58.586557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 [2024-10-06 11:33:58.601145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 11:33:58.601163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 [2024-10-06 11:33:58.612425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 11:33:58.612443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 [2024-10-06 11:33:58.626030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 11:33:58.626048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 [2024-10-06 11:33:58.640683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 11:33:58.640702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 [2024-10-06 11:33:58.654670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 11:33:58.654688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 [2024-10-06 11:33:58.669338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 11:33:58.669355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 [2024-10-06 11:33:58.685116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 
11:33:58.685136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 [2024-10-06 11:33:58.696631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 11:33:58.696649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 [2024-10-06 11:33:58.711335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 11:33:58.711353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 [2024-10-06 11:33:58.725914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 11:33:58.725931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 [2024-10-06 11:33:58.741318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 11:33:58.741336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.190 [2024-10-06 11:33:58.757392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.190 [2024-10-06 11:33:58.757411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:58.769484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:58.769502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:58.781679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:58.781696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:58.793456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:58.793478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:58.807022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:58.807040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:58.821489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:58.821506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:58.832999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:58.833016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:58.846368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:58.846385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:58.861080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:58.861098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:58.874611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:58.874629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:58.889198] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:58.889216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:58.900499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:58.900517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:58.914819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:58.914837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:58.929551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:58.929569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:58.945404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:58.945422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:58.956532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:58.956550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:58.969682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:58.969699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:58.981015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:58.981033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:58.994257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:58.994275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:59.009265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:59.009283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.450 [2024-10-06 11:33:59.022236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.450 [2024-10-06 11:33:59.022254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.709 [2024-10-06 11:33:59.037510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.709 [2024-10-06 11:33:59.037528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.710 [2024-10-06 11:33:59.053216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.710 [2024-10-06 11:33:59.053239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.710 [2024-10-06 11:33:59.064594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.710 [2024-10-06 11:33:59.064612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.710 [2024-10-06 11:33:59.079077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.710 [2024-10-06 11:33:59.079111] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.710 [2024-10-06 11:33:59.093201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.710 [2024-10-06 11:33:59.093218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.710 [2024-10-06 11:33:59.103682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.710 [2024-10-06 11:33:59.103700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.710 [2024-10-06 11:33:59.118243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.710 [2024-10-06 11:33:59.118260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.710 [2024-10-06 11:33:59.133793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.710 [2024-10-06 11:33:59.133810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.710 [2024-10-06 11:33:59.149270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.710 [2024-10-06 11:33:59.149295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.710 [2024-10-06 11:33:59.160958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.710 [2024-10-06 11:33:59.160976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.710 [2024-10-06 11:33:59.174320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.710 [2024-10-06 11:33:59.174338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.710 [2024-10-06 11:33:59.189214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.710 [2024-10-06 11:33:59.189232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.710 [2024-10-06 11:33:59.200684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.710 [2024-10-06 11:33:59.200702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.710 [2024-10-06 11:33:59.214763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.710 [2024-10-06 11:33:59.214780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.710 [2024-10-06 11:33:59.229110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.710 [2024-10-06 11:33:59.229129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.710 [2024-10-06 11:33:59.241148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.710 [2024-10-06 11:33:59.241166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.710 [2024-10-06 11:33:59.254615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.710 [2024-10-06 11:33:59.254632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.710 [2024-10-06 11:33:59.268920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.710 [2024-10-06 11:33:59.268938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.710 [2024-10-06 11:33:59.281881] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.710 [2024-10-06 11:33:59.281898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.297393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.297412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.309120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.309138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.319632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.319651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.334775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.334793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.349174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.349192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.361122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.361140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.374347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.374365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.389265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.389284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.400672] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.400691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.415069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.415087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.429761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.429779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.444337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.444355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.459041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.459067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.473425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.473442] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.488783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.488800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.502177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.502195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.516705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.516722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.529476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.529493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.970 [2024-10-06 11:33:59.541478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.970 [2024-10-06 11:33:59.541495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.230 [2024-10-06 11:33:59.556818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.230 [2024-10-06 11:33:59.556836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.230 [2024-10-06 11:33:59.569781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.230 [2024-10-06 11:33:59.569799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.230 16525.75 IOPS, 129.11 MiB/s [2024-10-06 11:33:59.581755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.230 [2024-10-06 11:33:59.581772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.230 [2024-10-06 11:33:59.597575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.230 [2024-10-06 11:33:59.597593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.230 [2024-10-06 11:33:59.613490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.230 [2024-10-06 11:33:59.613507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.230 [2024-10-06 11:33:59.629468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.230 [2024-10-06 11:33:59.629485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.230 [2024-10-06 11:33:59.644931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.230 [2024-10-06 11:33:59.644949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.230 [2024-10-06 11:33:59.657560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.230 [2024-10-06 11:33:59.657578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.230 [2024-10-06 11:33:59.669993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.230 [2024-10-06 11:33:59.670011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.230 [2024-10-06 11:33:59.681574] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.230 [2024-10-06 11:33:59.681591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.230 [2024-10-06 11:33:59.694150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.230 [2024-10-06 11:33:59.694169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.230 [2024-10-06 11:33:59.708966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.230 [2024-10-06 11:33:59.708984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.230 [2024-10-06 11:33:59.722494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.230 [2024-10-06 11:33:59.722513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.230 [2024-10-06 11:33:59.737849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.230 [2024-10-06 11:33:59.737866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.230 [2024-10-06 11:33:59.753083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.230 [2024-10-06 11:33:59.753117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.230 [2024-10-06 11:33:59.764483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.230 [2024-10-06 11:33:59.764503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.230 [2024-10-06 11:33:59.778116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.230 [2024-10-06 11:33:59.778134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.230 [2024-10-06 11:33:59.792677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.230 [2024-10-06 11:33:59.792696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:33:59.804863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:33:59.804882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:33:59.818823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:33:59.818840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:33:59.833359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:33:59.833377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:33:59.845096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:33:59.845114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:33:59.858170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:33:59.858188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:33:59.872784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:33:59.872801] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:33:59.884598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:33:59.884616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:33:59.898972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:33:59.898990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:33:59.913570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:33:59.913587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:33:59.926211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:33:59.926229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:33:59.941460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:33:59.941477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:33:59.956918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:33:59.956936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:33:59.968986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:33:59.969004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:33:59.981183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:33:59.981200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:33:59.994610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:33:59.994629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:34:00.011488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:34:00.011509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:34:00.026676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:34:00.026705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:34:00.041945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:34:00.041965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.490 [2024-10-06 11:34:00.052867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.490 [2024-10-06 11:34:00.052886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.750 [2024-10-06 11:34:00.066432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.750 [2024-10-06 11:34:00.066457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.750 [2024-10-06 11:34:00.075868] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.750 [2024-10-06 11:34:00.075895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.750 [2024-10-06 11:34:00.090326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.750 [2024-10-06 11:34:00.090345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.750 [2024-10-06 11:34:00.101645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.750 [2024-10-06 11:34:00.101663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.750 [2024-10-06 11:34:00.114024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.750 [2024-10-06 11:34:00.114042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.750 [2024-10-06 11:34:00.125669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.750 [2024-10-06 11:34:00.125687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.750 [2024-10-06 11:34:00.136895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.750 [2024-10-06 11:34:00.136914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.750 [2024-10-06 11:34:00.151448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.750 [2024-10-06 11:34:00.151467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.750 [2024-10-06 11:34:00.158978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.750 [2024-10-06 11:34:00.158995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.750 [2024-10-06 11:34:00.168804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.751 [2024-10-06 11:34:00.168823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.751 [2024-10-06 11:34:00.182138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.751 [2024-10-06 11:34:00.182157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.751 [2024-10-06 11:34:00.192744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.751 [2024-10-06 11:34:00.192762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.751 [2024-10-06 11:34:00.206959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.751 [2024-10-06 11:34:00.206978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.751 [2024-10-06 11:34:00.216995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.751 [2024-10-06 11:34:00.217013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.751 [2024-10-06 11:34:00.229751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.751 [2024-10-06 11:34:00.229769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.751 [2024-10-06 11:34:00.241259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.751 [2024-10-06 11:34:00.241277] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.751 [2024-10-06 11:34:00.248189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.751 [2024-10-06 11:34:00.248207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.751 [2024-10-06 11:34:00.256542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.751 [2024-10-06 11:34:00.256561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.751 [2024-10-06 11:34:00.268824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.751 [2024-10-06 11:34:00.268842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.751 [2024-10-06 11:34:00.282142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.751 [2024-10-06 11:34:00.282160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.751 [2024-10-06 11:34:00.293083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.751 [2024-10-06 11:34:00.293106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.751 [2024-10-06 11:34:00.306972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.751 [2024-10-06 11:34:00.306990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.751 [2024-10-06 11:34:00.316227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.751 [2024-10-06 11:34:00.316245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.330816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.330835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.339775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.339793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.354628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.354646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.363830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.363848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.378779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.378797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.387685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.387702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.394472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.394490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.404966] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.404984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.417170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.417189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.424224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.424242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.432273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.432290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.445472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.445490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.457235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.457253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.464176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.464193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.478763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.478781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.488419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.488437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.502314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.502337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.511497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.511515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.518259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.518276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.528959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.528977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.540713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.540731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.555374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.555392] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.564011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.564029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 [2024-10-06 11:34:00.577971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.577990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.011 16512.00 IOPS, 129.00 MiB/s [2024-10-06 11:34:00.585187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.011 [2024-10-06 11:34:00.585203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 00:39:03.271 Latency(us) 00:39:03.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:03.271 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:39:03.271 Nvme1n1 : 5.01 16514.30 129.02 0.00 0.00 7744.00 2153.33 14542.75 00:39:03.271 =================================================================================================================== 00:39:03.271 Total : 16514.30 129.02 0.00 0.00 7744.00 2153.33 14542.75 00:39:03.271 [2024-10-06 11:34:00.593180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.593194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.601181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.601194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.609186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.609200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.617191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.617208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.625181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.625192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.633183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.633193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.641180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.641191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.649177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.649187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.657183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.657196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 
11:34:00.665178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.665190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.673177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.673187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.681177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.681188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.689176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.689186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.697175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.697184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.705182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.705193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.713179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.713188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.721175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.721184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.729175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.729186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.737176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.737187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.745177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.745187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 [2024-10-06 11:34:00.753175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.271 [2024-10-06 11:34:00.753184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2328089) - No such process 00:39:03.271 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2328089 00:39:03.271 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:03.271 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.271 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
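The wall of "Requested NSID 1 already in use" / "Unable to add namespace" pairs above is the expected output for this phase of zcopy.sh: while the background I/O job (pid 2328089 here) runs against NSID 1 of nqn.2016-06.io.spdk:cnode1, the script keeps asking the target to attach a namespace under that same NSID, which pauses and resumes the subsystem on every attempt and is rejected every time. Once the job has exited, the namespace is detached over RPC, as traced above. A minimal sketch of that add/reject/remove cycle, assuming a running nvmf_tgt with the same subsystem configured; the rpc.py path and the malloc1 bdev name are illustrative and not taken from this run:

    rpc=./scripts/rpc.py
    # Spare bdev to offer under the already-occupied NSID (illustrative name).
    $rpc bdev_malloc_create -b malloc1 64 512
    # NSID 1 is already attached, so this RPC fails and the target logs
    # "Requested NSID 1 already in use" followed by "Unable to add namespace".
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1
    # Detach NSID 1 once the I/O job has finished.
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1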
00:39:03.271 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.271 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:03.271 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.271 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:03.271 delay0 00:39:03.271 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.271 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:03.271 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.271 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:03.271 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.271 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:03.531 [2024-10-06 11:34:00.917153] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:10.099 Initializing NVMe Controllers 00:39:10.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:10.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:10.099 Initialization complete. Launching workers. 
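The namespace is then re-populated with delay0, a delay bdev stacked on top of malloc0; going by the usual bdev_delay_create argument order, the four 1000000 values are the average and p99 read and write latencies in microseconds, so every I/O is held for roughly a second before completing. That is what gives the abort example, launched next, outstanding commands that can still be aborted. The invocation restated with its flags spelled out (meanings follow the common SPDK example-app options and are a paraphrase, not extra output from this run):

    # -c 0x1    : core mask, run the worker on core 0 only
    # -t 5      : run for 5 seconds
    # -q 64     : keep up to 64 I/Os outstanding
    # -w randrw -M 50 : random mixed workload with a 50% read share
    # -l warning : presumably the example's log level
    # -r '...'  : transport ID of the TCP listener created earlier in the test
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The NS and CTRLR summary a few lines below then reports how many I/Os completed, how many aborts were submitted, and how many of those aborts succeeded or failed.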
00:39:10.099 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1075 00:39:10.099 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1328, failed to submit 67 00:39:10.099 success 1180, unsuccessful 148, failed 0 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:10.099 rmmod nvme_tcp 00:39:10.099 rmmod nvme_fabrics 00:39:10.099 rmmod nvme_keyring 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 2326405 ']' 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 2326405 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2326405 ']' 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2326405 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2326405 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2326405' 00:39:10.099 killing process with pid 2326405 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2326405 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2326405 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:10.099 11:34:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:10.099 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:39:10.100 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:10.100 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:10.100 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:10.100 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:10.100 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:12.636 00:39:12.636 real 0m30.996s 00:39:12.636 user 0m40.321s 00:39:12.636 sys 0m12.326s 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:12.636 ************************************ 00:39:12.636 END TEST nvmf_zcopy 00:39:12.636 ************************************ 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:12.636 ************************************ 00:39:12.636 START TEST nvmf_nmic 00:39:12.636 ************************************ 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:12.636 * Looking for test storage... 
00:39:12.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:12.636 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:12.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.637 --rc genhtml_branch_coverage=1 00:39:12.637 --rc genhtml_function_coverage=1 00:39:12.637 --rc genhtml_legend=1 00:39:12.637 --rc geninfo_all_blocks=1 00:39:12.637 --rc geninfo_unexecuted_blocks=1 00:39:12.637 00:39:12.637 ' 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:12.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.637 --rc genhtml_branch_coverage=1 00:39:12.637 --rc genhtml_function_coverage=1 00:39:12.637 --rc genhtml_legend=1 00:39:12.637 --rc geninfo_all_blocks=1 00:39:12.637 --rc geninfo_unexecuted_blocks=1 00:39:12.637 00:39:12.637 ' 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:12.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.637 --rc genhtml_branch_coverage=1 00:39:12.637 --rc genhtml_function_coverage=1 00:39:12.637 --rc genhtml_legend=1 00:39:12.637 --rc geninfo_all_blocks=1 00:39:12.637 --rc geninfo_unexecuted_blocks=1 00:39:12.637 00:39:12.637 ' 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:12.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.637 --rc genhtml_branch_coverage=1 00:39:12.637 --rc genhtml_function_coverage=1 00:39:12.637 --rc genhtml_legend=1 00:39:12.637 --rc geninfo_all_blocks=1 00:39:12.637 --rc geninfo_unexecuted_blocks=1 00:39:12.637 00:39:12.637 ' 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:12.637 11:34:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:12.637 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:12.638 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:17.919 11:34:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:17.919 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:17.919 11:34:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:17.919 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:17.919 Found net devices under 0000:af:00.0: cvl_0_0 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:17.919 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:17.920 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:17.920 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:17.920 
11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:17.920 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:17.920 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:17.920 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:17.920 Found net devices under 0000:af:00.1: cvl_0_1 00:39:17.920 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:17.920 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:17.920 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:39:17.920 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:17.920 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:17.920 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:17.920 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:17.920 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:17.920 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:17.920 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:17.920 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:17.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:17.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:39:17.920 00:39:17.920 --- 10.0.0.2 ping statistics --- 00:39:17.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:17.920 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:17.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:17.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:39:17.920 00:39:17.920 --- 10.0.0.1 ping statistics --- 00:39:17.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:17.920 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt 
-i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=2333335 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 2333335 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2333335 ']' 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:17.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:17.920 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:17.920 [2024-10-06 11:34:15.300661] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:17.920 [2024-10-06 11:34:15.301556] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:39:17.920 [2024-10-06 11:34:15.301594] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:17.920 [2024-10-06 11:34:15.358659] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:17.920 [2024-10-06 11:34:15.399256] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:17.920 [2024-10-06 11:34:15.399297] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:17.920 [2024-10-06 11:34:15.399304] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:17.920 [2024-10-06 11:34:15.399310] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:17.920 [2024-10-06 11:34:15.399315] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:17.920 [2024-10-06 11:34:15.400642] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:39:17.920 [2024-10-06 11:34:15.400740] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:39:17.920 [2024-10-06 11:34:15.400830] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:39:17.920 [2024-10-06 11:34:15.400831] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:39:17.920 [2024-10-06 11:34:15.469855] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:17.920 [2024-10-06 11:34:15.469950] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:17.920 [2024-10-06 11:34:15.470206] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:17.920 [2024-10-06 11:34:15.470514] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:17.920 [2024-10-06 11:34:15.470781] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:18.240 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:18.240 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:39:18.240 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:18.240 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:18.240 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.240 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:18.240 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.241 [2024-10-06 11:34:15.537547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.241 Malloc0 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:18.241 
11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.241 [2024-10-06 11:34:15.597503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:18.241 test case1: single bdev can't be used in multiple subsystems 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.241 [2024-10-06 11:34:15.629232] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:39:18.241 [2024-10-06 11:34:15.629251] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:18.241 [2024-10-06 11:34:15.629259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.241 request: 00:39:18.241 { 00:39:18.241 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:18.241 "namespace": { 00:39:18.241 "bdev_name": "Malloc0", 00:39:18.241 "no_auto_visible": false 00:39:18.241 }, 00:39:18.241 "method": "nvmf_subsystem_add_ns", 00:39:18.241 "req_id": 1 00:39:18.241 } 00:39:18.241 Got JSON-RPC error response 00:39:18.241 response: 00:39:18.241 { 00:39:18.241 "code": -32602, 00:39:18.241 "message": "Invalid parameters" 00:39:18.241 } 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:18.241 11:34:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:18.241 Adding namespace failed - expected result. 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:18.241 test case2: host connect to nvmf target in multiple paths 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.241 [2024-10-06 11:34:15.641332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.241 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:18.541 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:18.848 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:18.848 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:39:18.848 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:18.848 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:39:18.848 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:39:20.755 11:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:20.755 11:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:39:20.755 11:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:20.755 11:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:39:20.755 11:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:20.755 11:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:39:20.755 11:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:20.755 [global] 00:39:20.755 thread=1 00:39:20.755 invalidate=1 
00:39:20.755 rw=write 00:39:20.755 time_based=1 00:39:20.755 runtime=1 00:39:20.755 ioengine=libaio 00:39:20.755 direct=1 00:39:20.755 bs=4096 00:39:20.755 iodepth=1 00:39:20.755 norandommap=0 00:39:20.755 numjobs=1 00:39:20.755 00:39:20.755 verify_dump=1 00:39:20.755 verify_backlog=512 00:39:20.755 verify_state_save=0 00:39:20.755 do_verify=1 00:39:20.755 verify=crc32c-intel 00:39:20.755 [job0] 00:39:20.755 filename=/dev/nvme0n1 00:39:20.755 Could not set queue depth (nvme0n1) 00:39:21.015 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:21.015 fio-3.35 00:39:21.015 Starting 1 thread 00:39:22.395 00:39:22.396 job0: (groupid=0, jobs=1): err= 0: pid=2333980: Sun Oct 6 11:34:19 2024 00:39:22.396 read: IOPS=21, BW=87.9KiB/s (90.0kB/s)(88.0KiB/1001msec) 00:39:22.396 slat (nsec): min=9435, max=24056, avg=22423.86, stdev=2964.99 00:39:22.396 clat (usec): min=40645, max=41017, avg=40951.97, stdev=72.29 00:39:22.396 lat (usec): min=40654, max=41040, avg=40974.39, stdev=75.09 00:39:22.396 clat percentiles (usec): 00:39:22.396 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:22.396 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:22.396 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:22.396 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:22.396 | 99.99th=[41157] 00:39:22.396 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:39:22.396 slat (nsec): min=10000, max=38253, avg=11287.09, stdev=2029.29 00:39:22.396 clat (usec): min=166, max=382, avg=179.78, stdev=11.30 00:39:22.396 lat (usec): min=177, max=420, avg=191.07, stdev=12.48 00:39:22.396 clat percentiles (usec): 00:39:22.396 | 1.00th=[ 169], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 176], 00:39:22.396 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 180], 00:39:22.396 | 70.00th=[ 182], 80.00th=[ 184], 90.00th=[ 186], 95.00th=[ 190], 00:39:22.396 | 99.00th=[ 204], 99.50th=[ 219], 99.90th=[ 383], 99.95th=[ 383], 00:39:22.396 | 99.99th=[ 383] 00:39:22.396 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:22.396 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:22.396 lat (usec) : 250=95.51%, 500=0.37% 00:39:22.396 lat (msec) : 50=4.12% 00:39:22.396 cpu : usr=0.60%, sys=0.70%, ctx=534, majf=0, minf=1 00:39:22.396 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:22.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.396 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:22.396 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:22.396 00:39:22.396 Run status group 0 (all jobs): 00:39:22.396 READ: bw=87.9KiB/s (90.0kB/s), 87.9KiB/s-87.9KiB/s (90.0kB/s-90.0kB/s), io=88.0KiB (90.1kB), run=1001-1001msec 00:39:22.396 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:39:22.396 00:39:22.396 Disk stats (read/write): 00:39:22.396 nvme0n1: ios=69/512, merge=0/0, ticks=806/83, in_queue=889, util=91.38% 00:39:22.396 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:22.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:22.396 11:34:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:22.396 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:39:22.396 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:22.396 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:22.396 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:22.396 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:22.396 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:39:22.396 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:22.396 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:22.396 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:22.396 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:22.396 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:22.396 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:22.396 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:22.396 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:22.396 rmmod nvme_tcp 00:39:22.396 rmmod nvme_fabrics 00:39:22.396 rmmod nvme_keyring 00:39:22.655 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:22.655 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:22.655 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:22.655 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 2333335 ']' 00:39:22.655 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 2333335 00:39:22.655 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2333335 ']' 00:39:22.655 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2333335 00:39:22.655 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:39:22.655 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:22.655 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2333335 00:39:22.655 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:22.655 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:22.655 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 2333335' 00:39:22.655 killing process with pid 2333335 00:39:22.655 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2333335 00:39:22.655 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2333335 00:39:22.914 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:22.914 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:22.914 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:22.914 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:22.914 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:39:22.914 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:39:22.914 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:22.914 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:22.914 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:22.914 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:22.914 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:22.914 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:24.824 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:24.824 00:39:24.824 real 0m12.660s 00:39:24.824 user 0m24.382s 00:39:24.824 sys 0m5.709s 00:39:24.824 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:24.824 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:24.824 ************************************ 00:39:24.824 END TEST nvmf_nmic 00:39:24.824 ************************************ 00:39:24.824 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:24.824 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:24.824 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:24.824 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:24.824 ************************************ 00:39:24.824 START TEST nvmf_fio_target 00:39:24.824 ************************************ 00:39:24.824 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:25.085 * Looking for test storage... 
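[Editorial note] Between the nvmf_nmic run that just finished and the nvmf_fio_target run starting here, the harness tears the TCP test rig down and will rebuild it before the next workload. A minimal shell sketch of that cleanup order, reconstructed from the traced commands above (the PID 2333335 and interface/namespace names are the ones this particular run logged; the internals of remove_spdk_ns are assumed to delete the namespace and are not shown in the trace):

  # disconnect the initiator from the test subsystem
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  # unload the kernel NVMe/TCP initiator stack loaded for the test
  modprobe -r nvme-tcp
  modprobe -r nvme-fabrics
  # stop the nvmf_tgt application started for this test (pid 2333335 in this run)
  kill 2333335
  # drop only the SPDK-tagged iptables rules, leaving everything else intact
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # remove the target-side namespace and flush the initiator-side address
  # (assumed behaviour of remove_spdk_ns; the trace only shows the helper being invoked)
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1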
00:39:25.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:25.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.085 --rc genhtml_branch_coverage=1 00:39:25.085 --rc genhtml_function_coverage=1 00:39:25.085 --rc genhtml_legend=1 00:39:25.085 --rc geninfo_all_blocks=1 00:39:25.085 --rc geninfo_unexecuted_blocks=1 00:39:25.085 00:39:25.085 ' 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:25.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.085 --rc genhtml_branch_coverage=1 00:39:25.085 --rc genhtml_function_coverage=1 00:39:25.085 --rc genhtml_legend=1 00:39:25.085 --rc geninfo_all_blocks=1 00:39:25.085 --rc geninfo_unexecuted_blocks=1 00:39:25.085 00:39:25.085 ' 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:25.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.085 --rc genhtml_branch_coverage=1 00:39:25.085 --rc genhtml_function_coverage=1 00:39:25.085 --rc genhtml_legend=1 00:39:25.085 --rc geninfo_all_blocks=1 00:39:25.085 --rc geninfo_unexecuted_blocks=1 00:39:25.085 00:39:25.085 ' 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:25.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.085 --rc genhtml_branch_coverage=1 00:39:25.085 --rc genhtml_function_coverage=1 00:39:25.085 --rc genhtml_legend=1 00:39:25.085 --rc geninfo_all_blocks=1 00:39:25.085 --rc geninfo_unexecuted_blocks=1 00:39:25.085 
00:39:25.085 ' 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:25.085 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:25.086 11:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:30.366 11:34:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:30.366 11:34:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:30.366 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:30.366 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:30.366 Found net 
devices under 0000:af:00.0: cvl_0_0 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:30.366 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:30.367 Found net devices under 0000:af:00.1: cvl_0_1 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:30.367 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:30.627 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:30.627 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:30.627 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:30.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:30.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:39:30.627 00:39:30.627 --- 10.0.0.2 ping statistics --- 00:39:30.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:30.627 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:30.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:30.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:39:30.627 00:39:30.627 --- 10.0.0.1 ping statistics --- 00:39:30.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:30.627 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=2337626 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 2337626 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2337626 ']' 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:30.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
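(For reference, the network plumbing the harness has just completed can be condensed into the sketch below. The interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addresses are all taken from this log; this is only a summary of the commands already shown above, not an additional step.)

  # Put the target-side e810 port into its own namespace so NVMe/TCP traffic
  # really crosses the link between the two detected ports.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Initiator keeps 10.0.0.1 on cvl_0_1; the namespaced target port gets 10.0.0.2.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open TCP/4420 on the initiator interface and sanity-check both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1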
00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:30.627 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:30.627 [2024-10-06 11:34:28.098163] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:30.627 [2024-10-06 11:34:28.099074] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:39:30.627 [2024-10-06 11:34:28.099109] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:30.627 [2024-10-06 11:34:28.156115] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:30.627 [2024-10-06 11:34:28.194295] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:30.627 [2024-10-06 11:34:28.194337] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:30.627 [2024-10-06 11:34:28.194345] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:30.627 [2024-10-06 11:34:28.194352] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:30.627 [2024-10-06 11:34:28.194358] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:30.627 [2024-10-06 11:34:28.195838] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:39:30.627 [2024-10-06 11:34:28.195939] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:39:30.627 [2024-10-06 11:34:28.196027] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:39:30.627 [2024-10-06 11:34:28.196028] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:39:30.887 [2024-10-06 11:34:28.266310] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:30.887 [2024-10-06 11:34:28.266507] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:30.887 [2024-10-06 11:34:28.266598] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:30.887 [2024-10-06 11:34:28.266670] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:30.887 [2024-10-06 11:34:28.266884] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
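(With the interrupt-mode target now running on all four reactors, the test provisions it over /var/tmp/spdk.sock. A condensed sketch of the sequence it performs next is given below; the exact rpc.py invocations appear further down in this log, and $SPDK is used here only as shorthand for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, not a variable the scripts themselves define.)

  # Start the target in interrupt mode on 4 cores inside the target namespace.
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  # TCP transport plus the backing bdevs: plain malloc bdevs, a RAID0 over
  # Malloc2/Malloc3 and a concat over Malloc4/Malloc5/Malloc6.
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512        # repeated for Malloc0..Malloc6
  $SPDK/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $SPDK/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  # One subsystem exposing four namespaces on the target-side address.
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  # Connect from the initiator side (host NQN/ID arguments omitted here);
  # the four namespaces then appear as /dev/nvme0n1 through /dev/nvme0n4.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420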
00:39:30.887 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:30.887 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:39:30.887 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:30.887 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:30.887 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:30.887 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:30.887 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:31.146 [2024-10-06 11:34:28.516761] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:31.146 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:31.405 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:31.405 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:31.405 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:31.664 11:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:31.664 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:31.664 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:31.924 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:31.924 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:32.184 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:32.444 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:32.444 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:32.445 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:32.445 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:32.703 11:34:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:39:32.703 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:32.962 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:33.222 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:33.222 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:33.481 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:33.481 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:33.481 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:33.740 [2024-10-06 11:34:31.168660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:33.740 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:34.000 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:34.258 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:34.517 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:34.517 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:39:34.517 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:34.517 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:39:34.517 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:39:34.517 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:39:36.423 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:36.423 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:39:36.423 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:36.423 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:39:36.423 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:36.423 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:39:36.423 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:36.423 [global] 00:39:36.423 thread=1 00:39:36.423 invalidate=1 00:39:36.423 rw=write 00:39:36.423 time_based=1 00:39:36.423 runtime=1 00:39:36.423 ioengine=libaio 00:39:36.423 direct=1 00:39:36.423 bs=4096 00:39:36.423 iodepth=1 00:39:36.423 norandommap=0 00:39:36.423 numjobs=1 00:39:36.423 00:39:36.423 verify_dump=1 00:39:36.423 verify_backlog=512 00:39:36.423 verify_state_save=0 00:39:36.423 do_verify=1 00:39:36.423 verify=crc32c-intel 00:39:36.423 [job0] 00:39:36.423 filename=/dev/nvme0n1 00:39:36.423 [job1] 00:39:36.423 filename=/dev/nvme0n2 00:39:36.423 [job2] 00:39:36.423 filename=/dev/nvme0n3 00:39:36.423 [job3] 00:39:36.423 filename=/dev/nvme0n4 00:39:36.423 Could not set queue depth (nvme0n1) 00:39:36.423 Could not set queue depth (nvme0n2) 00:39:36.423 Could not set queue depth (nvme0n3) 00:39:36.423 Could not set queue depth (nvme0n4) 00:39:36.682 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:36.682 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:36.682 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:36.682 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:36.682 fio-3.35 00:39:36.682 Starting 4 threads 00:39:38.062 00:39:38.062 job0: (groupid=0, jobs=1): err= 0: pid=2338743: Sun Oct 6 11:34:35 2024 00:39:38.062 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2060KiB/1007msec) 00:39:38.062 slat (nsec): min=4794, max=22960, avg=6631.47, stdev=2696.61 00:39:38.062 clat (usec): min=256, max=41355, avg=1525.16, stdev=6847.41 00:39:38.062 lat (usec): min=262, max=41365, avg=1531.79, stdev=6849.85 00:39:38.062 clat percentiles (usec): 00:39:38.062 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 293], 20.00th=[ 306], 00:39:38.062 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 351], 00:39:38.062 | 70.00th=[ 359], 80.00th=[ 375], 90.00th=[ 400], 95.00th=[ 433], 00:39:38.062 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:38.062 | 99.99th=[41157] 00:39:38.062 write: IOPS=1016, BW=4068KiB/s (4165kB/s)(4096KiB/1007msec); 0 zone resets 00:39:38.062 slat (nsec): min=5445, max=71223, avg=9766.61, stdev=2536.10 00:39:38.062 clat (usec): min=158, max=443, avg=199.79, stdev=20.21 00:39:38.062 lat (usec): min=165, max=457, avg=209.55, stdev=20.88 00:39:38.062 clat percentiles (usec): 00:39:38.062 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:39:38.062 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 200], 00:39:38.062 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 233], 00:39:38.062 | 99.00th=[ 273], 
99.50th=[ 289], 99.90th=[ 351], 99.95th=[ 445], 00:39:38.062 | 99.99th=[ 445] 00:39:38.062 bw ( KiB/s): min= 4096, max= 4096, per=16.07%, avg=4096.00, stdev= 0.00, samples=2 00:39:38.062 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:39:38.062 lat (usec) : 250=64.78%, 500=34.24% 00:39:38.062 lat (msec) : 50=0.97% 00:39:38.062 cpu : usr=0.50%, sys=1.39%, ctx=1539, majf=0, minf=1 00:39:38.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:38.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.062 issued rwts: total=515,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:38.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:38.062 job1: (groupid=0, jobs=1): err= 0: pid=2338762: Sun Oct 6 11:34:35 2024 00:39:38.062 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:39:38.062 slat (nsec): min=7392, max=44243, avg=8385.18, stdev=1290.80 00:39:38.062 clat (usec): min=342, max=538, avg=386.05, stdev=18.67 00:39:38.063 lat (usec): min=351, max=546, avg=394.44, stdev=18.66 00:39:38.063 clat percentiles (usec): 00:39:38.063 | 1.00th=[ 363], 5.00th=[ 367], 10.00th=[ 371], 20.00th=[ 375], 00:39:38.063 | 30.00th=[ 375], 40.00th=[ 379], 50.00th=[ 383], 60.00th=[ 388], 00:39:38.063 | 70.00th=[ 392], 80.00th=[ 396], 90.00th=[ 404], 95.00th=[ 416], 00:39:38.063 | 99.00th=[ 465], 99.50th=[ 482], 99.90th=[ 537], 99.95th=[ 537], 00:39:38.063 | 99.99th=[ 537] 00:39:38.063 write: IOPS=1590, BW=6362KiB/s (6514kB/s)(6368KiB/1001msec); 0 zone resets 00:39:38.063 slat (nsec): min=10739, max=39090, avg=12195.59, stdev=1693.71 00:39:38.063 clat (usec): min=181, max=627, avg=228.93, stdev=32.05 00:39:38.063 lat (usec): min=192, max=639, avg=241.13, stdev=32.20 00:39:38.063 clat percentiles (usec): 00:39:38.063 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 206], 00:39:38.063 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:39:38.063 | 70.00th=[ 235], 80.00th=[ 253], 90.00th=[ 273], 95.00th=[ 289], 00:39:38.063 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 594], 99.95th=[ 627], 00:39:38.063 | 99.99th=[ 627] 00:39:38.063 bw ( KiB/s): min= 8192, max= 8192, per=32.14%, avg=8192.00, stdev= 0.00, samples=1 00:39:38.063 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:38.063 lat (usec) : 250=39.64%, 500=60.20%, 750=0.16% 00:39:38.063 cpu : usr=2.90%, sys=5.00%, ctx=3133, majf=0, minf=1 00:39:38.063 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:38.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.063 issued rwts: total=1536,1592,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:38.063 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:38.063 job2: (groupid=0, jobs=1): err= 0: pid=2338799: Sun Oct 6 11:34:35 2024 00:39:38.063 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:39:38.063 slat (nsec): min=7320, max=27386, avg=8386.72, stdev=1328.31 00:39:38.063 clat (usec): min=350, max=513, avg=374.00, stdev=12.25 00:39:38.063 lat (usec): min=358, max=523, avg=382.38, stdev=12.32 00:39:38.063 clat percentiles (usec): 00:39:38.063 | 1.00th=[ 355], 5.00th=[ 359], 10.00th=[ 363], 20.00th=[ 367], 00:39:38.063 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 371], 60.00th=[ 375], 00:39:38.063 | 70.00th=[ 379], 80.00th=[ 383], 
90.00th=[ 388], 95.00th=[ 396], 00:39:38.063 | 99.00th=[ 408], 99.50th=[ 420], 99.90th=[ 486], 99.95th=[ 515], 00:39:38.063 | 99.99th=[ 515] 00:39:38.063 write: IOPS=1751, BW=7005KiB/s (7173kB/s)(7012KiB/1001msec); 0 zone resets 00:39:38.063 slat (nsec): min=10278, max=45150, avg=11732.63, stdev=1760.71 00:39:38.063 clat (usec): min=191, max=405, avg=217.75, stdev=11.75 00:39:38.063 lat (usec): min=201, max=417, avg=229.48, stdev=12.14 00:39:38.063 clat percentiles (usec): 00:39:38.063 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 210], 00:39:38.063 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 219], 00:39:38.063 | 70.00th=[ 223], 80.00th=[ 225], 90.00th=[ 231], 95.00th=[ 237], 00:39:38.063 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 367], 99.95th=[ 408], 00:39:38.063 | 99.99th=[ 408] 00:39:38.063 bw ( KiB/s): min= 8192, max= 8192, per=32.14%, avg=8192.00, stdev= 0.00, samples=1 00:39:38.063 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:38.063 lat (usec) : 250=52.84%, 500=47.13%, 750=0.03% 00:39:38.063 cpu : usr=3.00%, sys=5.00%, ctx=3289, majf=0, minf=1 00:39:38.063 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:38.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.063 issued rwts: total=1536,1753,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:38.063 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:38.063 job3: (groupid=0, jobs=1): err= 0: pid=2338810: Sun Oct 6 11:34:35 2024 00:39:38.063 read: IOPS=1866, BW=7465KiB/s (7644kB/s)(7472KiB/1001msec) 00:39:38.063 slat (nsec): min=6746, max=27779, avg=7580.94, stdev=892.29 00:39:38.063 clat (usec): min=223, max=514, avg=299.05, stdev=27.32 00:39:38.063 lat (usec): min=231, max=521, avg=306.63, stdev=27.34 00:39:38.063 clat percentiles (usec): 00:39:38.063 | 1.00th=[ 245], 5.00th=[ 262], 10.00th=[ 273], 20.00th=[ 281], 00:39:38.063 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 302], 00:39:38.063 | 70.00th=[ 306], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 347], 00:39:38.063 | 99.00th=[ 420], 99.50th=[ 453], 99.90th=[ 515], 99.95th=[ 515], 00:39:38.063 | 99.99th=[ 515] 00:39:38.063 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:39:38.063 slat (nsec): min=9513, max=42130, avg=10938.02, stdev=2386.50 00:39:38.063 clat (usec): min=149, max=400, avg=193.27, stdev=21.66 00:39:38.063 lat (usec): min=159, max=438, avg=204.21, stdev=22.40 00:39:38.063 clat percentiles (usec): 00:39:38.063 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 176], 00:39:38.063 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 196], 00:39:38.063 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 227], 00:39:38.063 | 99.00th=[ 255], 99.50th=[ 277], 99.90th=[ 314], 99.95th=[ 330], 00:39:38.063 | 99.99th=[ 400] 00:39:38.063 bw ( KiB/s): min= 8192, max= 8192, per=32.14%, avg=8192.00, stdev= 0.00, samples=1 00:39:38.063 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:38.063 lat (usec) : 250=52.40%, 500=47.55%, 750=0.05% 00:39:38.063 cpu : usr=1.80%, sys=4.10%, ctx=3916, majf=0, minf=1 00:39:38.063 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:38.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.063 issued rwts: 
total=1868,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:38.063 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:38.063 00:39:38.063 Run status group 0 (all jobs): 00:39:38.063 READ: bw=21.2MiB/s (22.2MB/s), 2046KiB/s-7465KiB/s (2095kB/s-7644kB/s), io=21.3MiB (22.3MB), run=1001-1007msec 00:39:38.063 WRITE: bw=24.9MiB/s (26.1MB/s), 4068KiB/s-8184KiB/s (4165kB/s-8380kB/s), io=25.1MiB (26.3MB), run=1001-1007msec 00:39:38.063 00:39:38.063 Disk stats (read/write): 00:39:38.063 nvme0n1: ios=561/641, merge=0/0, ticks=692/128, in_queue=820, util=82.35% 00:39:38.063 nvme0n2: ios=1102/1536, merge=0/0, ticks=1393/328, in_queue=1721, util=97.43% 00:39:38.063 nvme0n3: ios=1156/1536, merge=0/0, ticks=415/320, in_queue=735, util=87.60% 00:39:38.063 nvme0n4: ios=1536/1591, merge=0/0, ticks=446/296, in_queue=742, util=89.25% 00:39:38.063 11:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:38.063 [global] 00:39:38.063 thread=1 00:39:38.063 invalidate=1 00:39:38.063 rw=randwrite 00:39:38.063 time_based=1 00:39:38.063 runtime=1 00:39:38.063 ioengine=libaio 00:39:38.063 direct=1 00:39:38.063 bs=4096 00:39:38.063 iodepth=1 00:39:38.063 norandommap=0 00:39:38.063 numjobs=1 00:39:38.063 00:39:38.063 verify_dump=1 00:39:38.063 verify_backlog=512 00:39:38.063 verify_state_save=0 00:39:38.063 do_verify=1 00:39:38.063 verify=crc32c-intel 00:39:38.063 [job0] 00:39:38.063 filename=/dev/nvme0n1 00:39:38.063 [job1] 00:39:38.063 filename=/dev/nvme0n2 00:39:38.063 [job2] 00:39:38.063 filename=/dev/nvme0n3 00:39:38.063 [job3] 00:39:38.064 filename=/dev/nvme0n4 00:39:38.064 Could not set queue depth (nvme0n1) 00:39:38.064 Could not set queue depth (nvme0n2) 00:39:38.064 Could not set queue depth (nvme0n3) 00:39:38.064 Could not set queue depth (nvme0n4) 00:39:38.323 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:38.323 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:38.323 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:38.323 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:38.323 fio-3.35 00:39:38.323 Starting 4 threads 00:39:39.701 00:39:39.701 job0: (groupid=0, jobs=1): err= 0: pid=2339178: Sun Oct 6 11:34:37 2024 00:39:39.701 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:39:39.701 slat (nsec): min=6703, max=22667, avg=8252.84, stdev=1181.30 00:39:39.701 clat (usec): min=325, max=527, avg=367.37, stdev=36.07 00:39:39.701 lat (usec): min=332, max=536, avg=375.62, stdev=36.38 00:39:39.701 clat percentiles (usec): 00:39:39.701 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 347], 00:39:39.701 | 30.00th=[ 351], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 359], 00:39:39.701 | 70.00th=[ 371], 80.00th=[ 379], 90.00th=[ 404], 95.00th=[ 478], 00:39:39.701 | 99.00th=[ 498], 99.50th=[ 502], 99.90th=[ 529], 99.95th=[ 529], 00:39:39.701 | 99.99th=[ 529] 00:39:39.701 write: IOPS=1877, BW=7508KiB/s (7689kB/s)(7516KiB/1001msec); 0 zone resets 00:39:39.701 slat (nsec): min=9408, max=37698, avg=11319.22, stdev=1398.03 00:39:39.701 clat (usec): min=179, max=347, avg=208.59, stdev=15.93 00:39:39.701 lat (usec): min=190, max=385, avg=219.90, stdev=16.24 00:39:39.701 clat percentiles (usec): 
00:39:39.701 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 198], 00:39:39.701 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 206], 00:39:39.701 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 239], 95.00th=[ 243], 00:39:39.701 | 99.00th=[ 253], 99.50th=[ 258], 99.90th=[ 285], 99.95th=[ 347], 00:39:39.701 | 99.99th=[ 347] 00:39:39.701 bw ( KiB/s): min= 8192, max= 8192, per=35.55%, avg=8192.00, stdev= 0.00, samples=1 00:39:39.701 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:39.701 lat (usec) : 250=54.17%, 500=45.59%, 750=0.23% 00:39:39.701 cpu : usr=3.20%, sys=4.50%, ctx=3417, majf=0, minf=1 00:39:39.701 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:39.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.701 issued rwts: total=1536,1879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.701 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:39.701 job1: (groupid=0, jobs=1): err= 0: pid=2339186: Sun Oct 6 11:34:37 2024 00:39:39.701 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:39:39.701 slat (nsec): min=6443, max=23417, avg=7642.77, stdev=1473.77 00:39:39.701 clat (usec): min=296, max=41017, avg=680.05, stdev=3572.93 00:39:39.701 lat (usec): min=303, max=41033, avg=687.70, stdev=3573.80 00:39:39.701 clat percentiles (usec): 00:39:39.701 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 314], 20.00th=[ 322], 00:39:39.701 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 338], 00:39:39.701 | 70.00th=[ 359], 80.00th=[ 433], 90.00th=[ 449], 95.00th=[ 502], 00:39:39.701 | 99.00th=[ 519], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:39.701 | 99.99th=[41157] 00:39:39.701 write: IOPS=1300, BW=5203KiB/s (5328kB/s)(5208KiB/1001msec); 0 zone resets 00:39:39.701 slat (nsec): min=9206, max=63355, avg=10585.23, stdev=1933.98 00:39:39.701 clat (usec): min=174, max=471, avg=212.46, stdev=18.54 00:39:39.701 lat (usec): min=184, max=534, avg=223.05, stdev=19.40 00:39:39.701 clat percentiles (usec): 00:39:39.701 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:39:39.701 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 212], 00:39:39.701 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 245], 00:39:39.701 | 99.00th=[ 273], 99.50th=[ 289], 99.90th=[ 326], 99.95th=[ 474], 00:39:39.701 | 99.99th=[ 474] 00:39:39.701 bw ( KiB/s): min= 4096, max= 4096, per=17.77%, avg=4096.00, stdev= 0.00, samples=1 00:39:39.701 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:39.701 lat (usec) : 250=53.87%, 500=43.72%, 750=2.02%, 1000=0.04% 00:39:39.701 lat (msec) : 50=0.34% 00:39:39.701 cpu : usr=1.20%, sys=2.20%, ctx=2327, majf=0, minf=2 00:39:39.701 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:39.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.701 issued rwts: total=1024,1302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.701 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:39.701 job2: (groupid=0, jobs=1): err= 0: pid=2339197: Sun Oct 6 11:34:37 2024 00:39:39.701 read: IOPS=997, BW=3988KiB/s (4084kB/s)(4132KiB/1036msec) 00:39:39.701 slat (nsec): min=6842, max=30574, avg=9361.87, stdev=2126.74 00:39:39.701 clat (usec): min=276, max=41056, avg=656.67, stdev=3779.48 
00:39:39.701 lat (usec): min=285, max=41068, avg=666.03, stdev=3779.67 00:39:39.701 clat percentiles (usec): 00:39:39.701 | 1.00th=[ 285], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 293], 00:39:39.701 | 30.00th=[ 297], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306], 00:39:39.701 | 70.00th=[ 306], 80.00th=[ 310], 90.00th=[ 314], 95.00th=[ 322], 00:39:39.701 | 99.00th=[ 461], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:39.701 | 99.99th=[41157] 00:39:39.701 write: IOPS=1482, BW=5931KiB/s (6073kB/s)(6144KiB/1036msec); 0 zone resets 00:39:39.701 slat (nsec): min=9365, max=63722, avg=12098.01, stdev=2627.17 00:39:39.701 clat (usec): min=174, max=473, avg=209.34, stdev=17.81 00:39:39.701 lat (usec): min=190, max=537, avg=221.43, stdev=18.17 00:39:39.701 clat percentiles (usec): 00:39:39.701 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:39:39.701 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 208], 00:39:39.701 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 239], 00:39:39.701 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 408], 99.95th=[ 474], 00:39:39.701 | 99.99th=[ 474] 00:39:39.701 bw ( KiB/s): min= 4096, max= 8192, per=26.66%, avg=6144.00, stdev=2896.31, samples=2 00:39:39.701 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:39:39.701 lat (usec) : 250=58.19%, 500=41.42%, 750=0.04% 00:39:39.701 lat (msec) : 50=0.35% 00:39:39.701 cpu : usr=1.45%, sys=2.90%, ctx=2571, majf=0, minf=2 00:39:39.701 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:39.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.701 issued rwts: total=1033,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.701 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:39.701 job3: (groupid=0, jobs=1): err= 0: pid=2339199: Sun Oct 6 11:34:37 2024 00:39:39.701 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:39:39.701 slat (nsec): min=7852, max=32337, avg=8755.60, stdev=1388.15 00:39:39.701 clat (usec): min=340, max=41159, avg=686.32, stdev=3550.95 00:39:39.701 lat (usec): min=348, max=41173, avg=695.07, stdev=3551.33 00:39:39.701 clat percentiles (usec): 00:39:39.701 | 1.00th=[ 351], 5.00th=[ 355], 10.00th=[ 355], 20.00th=[ 359], 00:39:39.702 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 371], 60.00th=[ 371], 00:39:39.702 | 70.00th=[ 375], 80.00th=[ 379], 90.00th=[ 388], 95.00th=[ 396], 00:39:39.702 | 99.00th=[ 461], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:39.702 | 99.99th=[41157] 00:39:39.702 write: IOPS=1250, BW=5003KiB/s (5123kB/s)(5008KiB/1001msec); 0 zone resets 00:39:39.702 slat (nsec): min=9528, max=64712, avg=11481.82, stdev=2352.70 00:39:39.702 clat (usec): min=180, max=369, avg=213.74, stdev=15.18 00:39:39.702 lat (usec): min=192, max=434, avg=225.23, stdev=15.62 00:39:39.702 clat percentiles (usec): 00:39:39.702 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 202], 00:39:39.702 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:39:39.702 | 70.00th=[ 221], 80.00th=[ 225], 90.00th=[ 233], 95.00th=[ 241], 00:39:39.702 | 99.00th=[ 253], 99.50th=[ 258], 99.90th=[ 359], 99.95th=[ 371], 00:39:39.702 | 99.99th=[ 371] 00:39:39.702 bw ( KiB/s): min= 4096, max= 4096, per=17.77%, avg=4096.00, stdev= 0.00, samples=1 00:39:39.702 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:39.702 lat (usec) : 250=54.22%, 500=45.39%, 750=0.04% 
00:39:39.702 lat (msec) : 50=0.35% 00:39:39.702 cpu : usr=0.80%, sys=3.00%, ctx=2277, majf=0, minf=2 00:39:39.702 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:39.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.702 issued rwts: total=1024,1252,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.702 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:39.702 00:39:39.702 Run status group 0 (all jobs): 00:39:39.702 READ: bw=17.4MiB/s (18.3MB/s), 3988KiB/s-6138KiB/s (4084kB/s-6285kB/s), io=18.0MiB (18.9MB), run=1001-1036msec 00:39:39.702 WRITE: bw=22.5MiB/s (23.6MB/s), 5003KiB/s-7508KiB/s (5123kB/s-7689kB/s), io=23.3MiB (24.4MB), run=1001-1036msec 00:39:39.702 00:39:39.702 Disk stats (read/write): 00:39:39.702 nvme0n1: ios=1374/1536, merge=0/0, ticks=951/304, in_queue=1255, util=96.59% 00:39:39.702 nvme0n2: ios=831/1024, merge=0/0, ticks=622/219, in_queue=841, util=87.20% 00:39:39.702 nvme0n3: ios=1078/1536, merge=0/0, ticks=545/308, in_queue=853, util=91.25% 00:39:39.702 nvme0n4: ios=773/1024, merge=0/0, ticks=606/207, in_queue=813, util=89.71% 00:39:39.702 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:39.702 [global] 00:39:39.702 thread=1 00:39:39.702 invalidate=1 00:39:39.702 rw=write 00:39:39.702 time_based=1 00:39:39.702 runtime=1 00:39:39.702 ioengine=libaio 00:39:39.702 direct=1 00:39:39.702 bs=4096 00:39:39.702 iodepth=128 00:39:39.702 norandommap=0 00:39:39.702 numjobs=1 00:39:39.702 00:39:39.702 verify_dump=1 00:39:39.702 verify_backlog=512 00:39:39.702 verify_state_save=0 00:39:39.702 do_verify=1 00:39:39.702 verify=crc32c-intel 00:39:39.702 [job0] 00:39:39.702 filename=/dev/nvme0n1 00:39:39.702 [job1] 00:39:39.702 filename=/dev/nvme0n2 00:39:39.702 [job2] 00:39:39.702 filename=/dev/nvme0n3 00:39:39.702 [job3] 00:39:39.702 filename=/dev/nvme0n4 00:39:39.702 Could not set queue depth (nvme0n1) 00:39:39.702 Could not set queue depth (nvme0n2) 00:39:39.702 Could not set queue depth (nvme0n3) 00:39:39.702 Could not set queue depth (nvme0n4) 00:39:39.961 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:39.961 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:39.961 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:39.961 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:39.961 fio-3.35 00:39:39.961 Starting 4 threads 00:39:41.364 00:39:41.364 job0: (groupid=0, jobs=1): err= 0: pid=2339599: Sun Oct 6 11:34:38 2024 00:39:41.364 read: IOPS=3581, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1005msec) 00:39:41.364 slat (nsec): min=1447, max=11981k, avg=108836.97, stdev=683260.61 00:39:41.364 clat (usec): min=2119, max=54373, avg=15261.14, stdev=7674.77 00:39:41.364 lat (usec): min=2126, max=54401, avg=15369.98, stdev=7734.44 00:39:41.364 clat percentiles (usec): 00:39:41.364 | 1.00th=[ 3752], 5.00th=[ 7963], 10.00th=[ 9503], 20.00th=[10814], 00:39:41.364 | 30.00th=[11207], 40.00th=[11863], 50.00th=[13435], 60.00th=[14615], 00:39:41.364 | 70.00th=[16188], 80.00th=[17957], 90.00th=[21890], 95.00th=[32637], 00:39:41.364 | 99.00th=[42206], 
99.50th=[46924], 99.90th=[46924], 99.95th=[51643], 00:39:41.364 | 99.99th=[54264] 00:39:41.364 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:39:41.364 slat (usec): min=2, max=18351, avg=138.10, stdev=813.20 00:39:41.364 clat (usec): min=5338, max=75813, avg=17548.54, stdev=11074.66 00:39:41.364 lat (usec): min=5393, max=75818, avg=17686.64, stdev=11138.52 00:39:41.364 clat percentiles (usec): 00:39:41.364 | 1.00th=[ 7177], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10290], 00:39:41.364 | 30.00th=[10814], 40.00th=[11469], 50.00th=[12911], 60.00th=[16909], 00:39:41.364 | 70.00th=[20055], 80.00th=[21627], 90.00th=[30540], 95.00th=[34866], 00:39:41.364 | 99.00th=[72877], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:39:41.364 | 99.99th=[76022] 00:39:41.364 bw ( KiB/s): min=15488, max=16351, per=21.03%, avg=15919.50, stdev=610.23, samples=2 00:39:41.364 iops : min= 3872, max= 4087, avg=3979.50, stdev=152.03, samples=2 00:39:41.364 lat (msec) : 4=0.52%, 10=11.35%, 20=66.48%, 50=20.18%, 100=1.47% 00:39:41.364 cpu : usr=3.19%, sys=3.69%, ctx=526, majf=0, minf=1 00:39:41.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:41.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:41.364 issued rwts: total=3599,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:41.364 job1: (groupid=0, jobs=1): err= 0: pid=2339610: Sun Oct 6 11:34:38 2024 00:39:41.364 read: IOPS=5479, BW=21.4MiB/s (22.4MB/s)(21.5MiB/1003msec) 00:39:41.364 slat (nsec): min=1151, max=24159k, avg=71772.14, stdev=684561.10 00:39:41.364 clat (usec): min=994, max=48767, avg=11723.89, stdev=6267.87 00:39:41.364 lat (usec): min=1743, max=58196, avg=11795.66, stdev=6311.18 00:39:41.364 clat percentiles (usec): 00:39:41.364 | 1.00th=[ 3359], 5.00th=[ 5145], 10.00th=[ 5604], 20.00th=[ 7308], 00:39:41.364 | 30.00th=[ 8225], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[11076], 00:39:41.364 | 70.00th=[12387], 80.00th=[14877], 90.00th=[20055], 95.00th=[24249], 00:39:41.364 | 99.00th=[33817], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:39:41.364 | 99.99th=[49021] 00:39:41.364 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:39:41.364 slat (usec): min=2, max=16150, avg=72.37, stdev=640.18 00:39:41.364 clat (usec): min=925, max=38247, avg=11045.97, stdev=5360.69 00:39:41.364 lat (usec): min=936, max=42390, avg=11118.34, stdev=5405.78 00:39:41.364 clat percentiles (usec): 00:39:41.364 | 1.00th=[ 2704], 5.00th=[ 4359], 10.00th=[ 5669], 20.00th=[ 7111], 00:39:41.364 | 30.00th=[ 8586], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10421], 00:39:41.364 | 70.00th=[12256], 80.00th=[14222], 90.00th=[16712], 95.00th=[22152], 00:39:41.364 | 99.00th=[31327], 99.50th=[32113], 99.90th=[36439], 99.95th=[36439], 00:39:41.364 | 99.99th=[38011] 00:39:41.364 bw ( KiB/s): min=20488, max=24518, per=29.73%, avg=22503.00, stdev=2849.64, samples=2 00:39:41.364 iops : min= 5122, max= 6129, avg=5625.50, stdev=712.06, samples=2 00:39:41.364 lat (usec) : 1000=0.03% 00:39:41.364 lat (msec) : 2=0.15%, 4=2.37%, 10=48.89%, 20=40.35%, 50=8.20% 00:39:41.364 cpu : usr=3.89%, sys=6.39%, ctx=421, majf=0, minf=1 00:39:41.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:39:41.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.364 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:41.364 issued rwts: total=5496,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:41.364 job2: (groupid=0, jobs=1): err= 0: pid=2339624: Sun Oct 6 11:34:38 2024 00:39:41.364 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:39:41.364 slat (nsec): min=1395, max=18384k, avg=128243.42, stdev=962959.01 00:39:41.364 clat (usec): min=4754, max=44920, avg=15295.91, stdev=6104.80 00:39:41.364 lat (usec): min=4765, max=44929, avg=15424.16, stdev=6186.25 00:39:41.364 clat percentiles (usec): 00:39:41.364 | 1.00th=[ 8717], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10683], 00:39:41.364 | 30.00th=[11600], 40.00th=[12125], 50.00th=[13960], 60.00th=[14746], 00:39:41.365 | 70.00th=[17957], 80.00th=[19268], 90.00th=[20841], 95.00th=[25560], 00:39:41.365 | 99.00th=[41157], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:39:41.365 | 99.99th=[44827] 00:39:41.365 write: IOPS=4391, BW=17.2MiB/s (18.0MB/s)(17.2MiB/1002msec); 0 zone resets 00:39:41.365 slat (usec): min=2, max=16396, avg=99.42, stdev=675.66 00:39:41.365 clat (usec): min=513, max=44923, avg=14641.35, stdev=7104.34 00:39:41.365 lat (usec): min=1668, max=44932, avg=14740.77, stdev=7143.58 00:39:41.365 clat percentiles (usec): 00:39:41.365 | 1.00th=[ 3687], 5.00th=[ 6325], 10.00th=[ 7832], 20.00th=[ 8979], 00:39:41.365 | 30.00th=[10552], 40.00th=[11469], 50.00th=[13173], 60.00th=[13960], 00:39:41.365 | 70.00th=[15008], 80.00th=[19792], 90.00th=[25297], 95.00th=[30016], 00:39:41.365 | 99.00th=[36963], 99.50th=[36963], 99.90th=[36963], 99.95th=[41157], 00:39:41.365 | 99.99th=[44827] 00:39:41.365 bw ( KiB/s): min=15185, max=15185, per=20.06%, avg=15185.00, stdev= 0.00, samples=1 00:39:41.365 iops : min= 3796, max= 3796, avg=3796.00, stdev= 0.00, samples=1 00:39:41.365 lat (usec) : 750=0.01% 00:39:41.365 lat (msec) : 2=0.11%, 4=0.42%, 10=17.93%, 20=64.64%, 50=16.89% 00:39:41.365 cpu : usr=3.40%, sys=6.89%, ctx=332, majf=0, minf=1 00:39:41.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:41.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:41.365 issued rwts: total=4096,4400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:41.365 job3: (groupid=0, jobs=1): err= 0: pid=2339629: Sun Oct 6 11:34:38 2024 00:39:41.365 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:39:41.365 slat (nsec): min=1053, max=21891k, avg=100343.78, stdev=681523.02 00:39:41.365 clat (usec): min=898, max=62840, avg=13459.44, stdev=7502.72 00:39:41.365 lat (usec): min=921, max=62850, avg=13559.78, stdev=7560.58 00:39:41.365 clat percentiles (usec): 00:39:41.365 | 1.00th=[ 2073], 5.00th=[ 4752], 10.00th=[ 7570], 20.00th=[ 9634], 00:39:41.365 | 30.00th=[10552], 40.00th=[11338], 50.00th=[12256], 60.00th=[12911], 00:39:41.365 | 70.00th=[13435], 80.00th=[15270], 90.00th=[19006], 95.00th=[27132], 00:39:41.365 | 99.00th=[47973], 99.50th=[55313], 99.90th=[62653], 99.95th=[62653], 00:39:41.365 | 99.99th=[62653] 00:39:41.365 write: IOPS=4868, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1004msec); 0 zone resets 00:39:41.365 slat (nsec): min=1853, max=13494k, avg=86583.43, stdev=536972.06 00:39:41.365 clat (usec): min=762, max=55199, avg=13313.68, stdev=8058.84 00:39:41.365 lat (usec): min=772, max=59227, avg=13400.26, stdev=8084.97 
00:39:41.365 clat percentiles (usec): 00:39:41.365 | 1.00th=[ 1205], 5.00th=[ 4621], 10.00th=[ 6587], 20.00th=[ 9765], 00:39:41.365 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11863], 60.00th=[12649], 00:39:41.365 | 70.00th=[13304], 80.00th=[13960], 90.00th=[20841], 95.00th=[27132], 00:39:41.365 | 99.00th=[50594], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:39:41.365 | 99.99th=[55313] 00:39:41.365 bw ( KiB/s): min=15480, max=22562, per=25.13%, avg=19021.00, stdev=5007.73, samples=2 00:39:41.365 iops : min= 3870, max= 5640, avg=4755.00, stdev=1251.58, samples=2 00:39:41.365 lat (usec) : 1000=0.07% 00:39:41.365 lat (msec) : 2=0.90%, 4=1.99%, 10=19.34%, 20=67.68%, 50=8.99% 00:39:41.365 lat (msec) : 100=1.02% 00:39:41.365 cpu : usr=2.09%, sys=5.08%, ctx=515, majf=0, minf=2 00:39:41.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:39:41.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:41.365 issued rwts: total=4608,4888,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:41.365 00:39:41.365 Run status group 0 (all jobs): 00:39:41.365 READ: bw=69.2MiB/s (72.5MB/s), 14.0MiB/s-21.4MiB/s (14.7MB/s-22.4MB/s), io=69.5MiB (72.9MB), run=1002-1005msec 00:39:41.365 WRITE: bw=73.9MiB/s (77.5MB/s), 15.9MiB/s-21.9MiB/s (16.7MB/s-23.0MB/s), io=74.3MiB (77.9MB), run=1002-1005msec 00:39:41.365 00:39:41.365 Disk stats (read/write): 00:39:41.365 nvme0n1: ios=3161/3584, merge=0/0, ticks=24438/34930, in_queue=59368, util=96.29% 00:39:41.365 nvme0n2: ios=4628/4821, merge=0/0, ticks=51580/42989, in_queue=94569, util=95.13% 00:39:41.365 nvme0n3: ios=3130/3584, merge=0/0, ticks=50662/55206, in_queue=105868, util=98.44% 00:39:41.365 nvme0n4: ios=4096/4595, merge=0/0, ticks=27630/34198, in_queue=61828, util=89.74% 00:39:41.365 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:41.365 [global] 00:39:41.365 thread=1 00:39:41.365 invalidate=1 00:39:41.365 rw=randwrite 00:39:41.365 time_based=1 00:39:41.365 runtime=1 00:39:41.365 ioengine=libaio 00:39:41.365 direct=1 00:39:41.365 bs=4096 00:39:41.365 iodepth=128 00:39:41.365 norandommap=0 00:39:41.365 numjobs=1 00:39:41.365 00:39:41.365 verify_dump=1 00:39:41.365 verify_backlog=512 00:39:41.365 verify_state_save=0 00:39:41.365 do_verify=1 00:39:41.365 verify=crc32c-intel 00:39:41.365 [job0] 00:39:41.365 filename=/dev/nvme0n1 00:39:41.365 [job1] 00:39:41.365 filename=/dev/nvme0n2 00:39:41.365 [job2] 00:39:41.365 filename=/dev/nvme0n3 00:39:41.365 [job3] 00:39:41.365 filename=/dev/nvme0n4 00:39:41.365 Could not set queue depth (nvme0n1) 00:39:41.365 Could not set queue depth (nvme0n2) 00:39:41.365 Could not set queue depth (nvme0n3) 00:39:41.365 Could not set queue depth (nvme0n4) 00:39:41.626 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:41.626 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:41.626 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:41.626 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:41.626 fio-3.35 00:39:41.626 
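(Each of these fio passes is driven by an ini job file that the fio-wrapper emits. Stripped of the interleaved log timestamps, the job file for this randwrite/iodepth=128 pass reads as follows; job1..job3 repeat the same stanza for /dev/nvme0n2 through /dev/nvme0n4, and the file can be replayed by hand with a plain "fio <jobfile>" against the connected devices.)

  [global]
  thread=1
  invalidate=1
  rw=randwrite
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=128
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1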
Starting 4 threads 00:39:42.997 00:39:42.997 job0: (groupid=0, jobs=1): err= 0: pid=2340003: Sun Oct 6 11:34:40 2024 00:39:42.997 read: IOPS=7273, BW=28.4MiB/s (29.8MB/s)(28.5MiB/1003msec) 00:39:42.997 slat (nsec): min=1510, max=7615.0k, avg=60793.46, stdev=457937.14 00:39:42.997 clat (usec): min=1134, max=15577, avg=8113.72, stdev=2048.52 00:39:42.997 lat (usec): min=3822, max=22233, avg=8174.51, stdev=2066.96 00:39:42.997 clat percentiles (usec): 00:39:42.997 | 1.00th=[ 4146], 5.00th=[ 5276], 10.00th=[ 5932], 20.00th=[ 6521], 00:39:42.997 | 30.00th=[ 7046], 40.00th=[ 7373], 50.00th=[ 7635], 60.00th=[ 8029], 00:39:42.997 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[11731], 00:39:42.997 | 99.00th=[14615], 99.50th=[15008], 99.90th=[15139], 99.95th=[15139], 00:39:42.997 | 99.99th=[15533] 00:39:42.997 write: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec); 0 zone resets 00:39:42.997 slat (usec): min=2, max=15617, avg=66.33, stdev=563.41 00:39:42.997 clat (usec): min=1684, max=57434, avg=8710.26, stdev=7028.86 00:39:42.997 lat (usec): min=1694, max=57444, avg=8776.58, stdev=7068.58 00:39:42.997 clat percentiles (usec): 00:39:42.997 | 1.00th=[ 4228], 5.00th=[ 4752], 10.00th=[ 5014], 20.00th=[ 5669], 00:39:42.997 | 30.00th=[ 6521], 40.00th=[ 7046], 50.00th=[ 7635], 60.00th=[ 8029], 00:39:42.997 | 70.00th=[ 8291], 80.00th=[ 9896], 90.00th=[10421], 95.00th=[11076], 00:39:42.997 | 99.00th=[54789], 99.50th=[56361], 99.90th=[57410], 99.95th=[57410], 00:39:42.997 | 99.99th=[57410] 00:39:42.997 bw ( KiB/s): min=28672, max=32768, per=48.68%, avg=30720.00, stdev=2896.31, samples=2 00:39:42.997 iops : min= 7168, max= 8192, avg=7680.00, stdev=724.08, samples=2 00:39:42.997 lat (msec) : 2=0.09%, 4=0.41%, 10=81.14%, 20=16.86%, 50=0.87% 00:39:42.997 lat (msec) : 100=0.62% 00:39:42.997 cpu : usr=6.29%, sys=9.38%, ctx=401, majf=0, minf=1 00:39:42.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:39:42.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:42.997 issued rwts: total=7295,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:42.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:42.997 job1: (groupid=0, jobs=1): err= 0: pid=2340013: Sun Oct 6 11:34:40 2024 00:39:42.997 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:39:42.997 slat (nsec): min=1130, max=16285k, avg=174485.85, stdev=1125292.06 00:39:42.997 clat (usec): min=6390, max=63314, avg=22917.80, stdev=12142.05 00:39:42.997 lat (usec): min=6396, max=63322, avg=23092.28, stdev=12196.25 00:39:42.997 clat percentiles (usec): 00:39:42.997 | 1.00th=[ 6980], 5.00th=[ 7635], 10.00th=[ 8160], 20.00th=[14484], 00:39:42.997 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17957], 60.00th=[21627], 00:39:42.997 | 70.00th=[27395], 80.00th=[34341], 90.00th=[41681], 95.00th=[46400], 00:39:42.997 | 99.00th=[58983], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:39:42.997 | 99.99th=[63177] 00:39:42.997 write: IOPS=2767, BW=10.8MiB/s (11.3MB/s)(10.9MiB/1009msec); 0 zone resets 00:39:42.997 slat (nsec): min=1812, max=20466k, avg=189889.65, stdev=1116695.04 00:39:42.997 clat (usec): min=1483, max=73012, avg=24779.23, stdev=18047.07 00:39:42.997 lat (usec): min=1497, max=73033, avg=24969.12, stdev=18137.84 00:39:42.997 clat percentiles (usec): 00:39:42.997 | 1.00th=[ 5604], 5.00th=[ 6915], 10.00th=[ 9241], 20.00th=[ 9503], 00:39:42.997 | 30.00th=[10028], 40.00th=[16581], 
50.00th=[17433], 60.00th=[22676], 00:39:42.997 | 70.00th=[26346], 80.00th=[42730], 90.00th=[57410], 95.00th=[61604], 00:39:42.997 | 99.00th=[72877], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:39:42.997 | 99.99th=[72877] 00:39:42.997 bw ( KiB/s): min= 9024, max=12288, per=16.89%, avg=10656.00, stdev=2308.00, samples=2 00:39:42.997 iops : min= 2256, max= 3072, avg=2664.00, stdev=577.00, samples=2 00:39:42.997 lat (msec) : 2=0.06%, 10=23.58%, 20=31.52%, 50=35.01%, 100=9.83% 00:39:42.997 cpu : usr=1.88%, sys=3.87%, ctx=270, majf=0, minf=2 00:39:42.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:39:42.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:42.997 issued rwts: total=2560,2792,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:42.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:42.997 job2: (groupid=0, jobs=1): err= 0: pid=2340026: Sun Oct 6 11:34:40 2024 00:39:42.997 read: IOPS=2074, BW=8298KiB/s (8497kB/s)(8696KiB/1048msec) 00:39:42.997 slat (nsec): min=1592, max=12927k, avg=150457.74, stdev=877783.46 00:39:42.997 clat (usec): min=4335, max=75795, avg=18680.13, stdev=12992.24 00:39:42.997 lat (usec): min=4345, max=75799, avg=18830.58, stdev=13043.61 00:39:42.997 clat percentiles (usec): 00:39:42.997 | 1.00th=[ 5538], 5.00th=[ 7767], 10.00th=[10552], 20.00th=[11338], 00:39:42.997 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12649], 60.00th=[15795], 00:39:42.997 | 70.00th=[20317], 80.00th=[24511], 90.00th=[30802], 95.00th=[53740], 00:39:42.997 | 99.00th=[70779], 99.50th=[73925], 99.90th=[76022], 99.95th=[76022], 00:39:42.997 | 99.99th=[76022] 00:39:42.997 write: IOPS=2442, BW=9771KiB/s (10.0MB/s)(10.0MiB/1048msec); 0 zone resets 00:39:42.997 slat (usec): min=2, max=11795, avg=258.84, stdev=1050.67 00:39:42.997 clat (msec): min=3, max=110, avg=35.97, stdev=23.16 00:39:42.997 lat (msec): min=3, max=110, avg=36.23, stdev=23.30 00:39:42.997 clat percentiles (msec): 00:39:42.997 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 15], 00:39:42.997 | 30.00th=[ 26], 40.00th=[ 28], 50.00th=[ 31], 60.00th=[ 34], 00:39:42.997 | 70.00th=[ 39], 80.00th=[ 56], 90.00th=[ 67], 95.00th=[ 86], 00:39:42.997 | 99.00th=[ 107], 99.50th=[ 110], 99.90th=[ 111], 99.95th=[ 111], 00:39:42.997 | 99.99th=[ 111] 00:39:42.997 bw ( KiB/s): min= 9584, max=10880, per=16.21%, avg=10232.00, stdev=916.41, samples=2 00:39:42.997 iops : min= 2396, max= 2720, avg=2558.00, stdev=229.10, samples=2 00:39:42.997 lat (msec) : 4=0.38%, 10=5.83%, 20=38.99%, 50=39.31%, 100=13.46% 00:39:42.997 lat (msec) : 250=2.03% 00:39:42.998 cpu : usr=1.43%, sys=3.25%, ctx=324, majf=0, minf=1 00:39:42.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:39:42.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:42.998 issued rwts: total=2174,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:42.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:42.998 job3: (groupid=0, jobs=1): err= 0: pid=2340028: Sun Oct 6 11:34:40 2024 00:39:42.998 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec) 00:39:42.998 slat (nsec): min=1199, max=10524k, avg=128563.42, stdev=802843.43 00:39:42.998 clat (usec): min=4654, max=42554, avg=14810.21, stdev=5802.25 00:39:42.998 lat (usec): min=4659, max=42561, avg=14938.77, stdev=5872.89 
00:39:42.998 clat percentiles (usec): 00:39:42.998 | 1.00th=[ 6194], 5.00th=[10683], 10.00th=[10945], 20.00th=[11731], 00:39:42.998 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12780], 60.00th=[13173], 00:39:42.998 | 70.00th=[13698], 80.00th=[17695], 90.00th=[19792], 95.00th=[27919], 00:39:42.998 | 99.00th=[39060], 99.50th=[40109], 99.90th=[42730], 99.95th=[42730], 00:39:42.998 | 99.99th=[42730] 00:39:42.998 write: IOPS=3457, BW=13.5MiB/s (14.2MB/s)(13.7MiB/1013msec); 0 zone resets 00:39:42.998 slat (usec): min=2, max=11203, avg=168.63, stdev=816.59 00:39:42.998 clat (usec): min=3080, max=67201, avg=23523.00, stdev=15875.64 00:39:42.998 lat (usec): min=3089, max=67205, avg=23691.63, stdev=15975.59 00:39:42.998 clat percentiles (usec): 00:39:42.998 | 1.00th=[ 4080], 5.00th=[ 7308], 10.00th=[ 7963], 20.00th=[10421], 00:39:42.998 | 30.00th=[12125], 40.00th=[13435], 50.00th=[16319], 60.00th=[26084], 00:39:42.998 | 70.00th=[30540], 80.00th=[33424], 90.00th=[49021], 95.00th=[58983], 00:39:42.998 | 99.00th=[65274], 99.50th=[65799], 99.90th=[67634], 99.95th=[67634], 00:39:42.998 | 99.99th=[67634] 00:39:42.998 bw ( KiB/s): min=10608, max=16384, per=21.39%, avg=13496.00, stdev=4084.25, samples=2 00:39:42.998 iops : min= 2652, max= 4096, avg=3374.00, stdev=1021.06, samples=2 00:39:42.998 lat (msec) : 4=0.46%, 10=10.85%, 20=59.31%, 50=24.13%, 100=5.26% 00:39:42.998 cpu : usr=2.37%, sys=3.26%, ctx=322, majf=0, minf=1 00:39:42.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:39:42.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:42.998 issued rwts: total=3072,3502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:42.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:42.998 00:39:42.998 Run status group 0 (all jobs): 00:39:42.998 READ: bw=56.3MiB/s (59.0MB/s), 8298KiB/s-28.4MiB/s (8497kB/s-29.8MB/s), io=59.0MiB (61.9MB), run=1003-1048msec 00:39:42.998 WRITE: bw=61.6MiB/s (64.6MB/s), 9771KiB/s-29.9MiB/s (10.0MB/s-31.4MB/s), io=64.6MiB (67.7MB), run=1003-1048msec 00:39:42.998 00:39:42.998 Disk stats (read/write): 00:39:42.998 nvme0n1: ios=6194/6495, merge=0/0, ticks=47575/48071, in_queue=95646, util=91.58% 00:39:42.998 nvme0n2: ios=2094/2560, merge=0/0, ticks=13859/20310, in_queue=34169, util=94.22% 00:39:42.998 nvme0n3: ios=2105/2119, merge=0/0, ticks=31593/67618, in_queue=99211, util=98.86% 00:39:42.998 nvme0n4: ios=2579/2999, merge=0/0, ticks=36874/66883, in_queue=103757, util=97.80% 00:39:42.998 11:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:42.998 11:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2340131 00:39:42.998 11:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:42.998 11:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:42.998 [global] 00:39:42.998 thread=1 00:39:42.998 invalidate=1 00:39:42.998 rw=read 00:39:42.998 time_based=1 00:39:42.998 runtime=10 00:39:42.998 ioengine=libaio 00:39:42.998 direct=1 00:39:42.998 bs=4096 00:39:42.998 iodepth=1 00:39:42.998 norandommap=1 00:39:42.998 numjobs=1 00:39:42.998 00:39:42.998 [job0] 00:39:42.998 filename=/dev/nvme0n1 00:39:42.998 [job1] 00:39:42.998 filename=/dev/nvme0n2 00:39:42.998 [job2] 
00:39:42.998 filename=/dev/nvme0n3 00:39:42.998 [job3] 00:39:42.998 filename=/dev/nvme0n4 00:39:42.998 Could not set queue depth (nvme0n1) 00:39:42.998 Could not set queue depth (nvme0n2) 00:39:42.998 Could not set queue depth (nvme0n3) 00:39:42.998 Could not set queue depth (nvme0n4) 00:39:43.255 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:43.255 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:43.255 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:43.255 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:43.255 fio-3.35 00:39:43.255 Starting 4 threads 00:39:45.789 11:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:46.046 11:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:46.046 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=278528, buflen=4096 00:39:46.046 fio: pid=2340398, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:46.304 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=15212544, buflen=4096 00:39:46.304 fio: pid=2340397, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:46.304 11:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:46.304 11:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:46.561 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=29704192, buflen=4096 00:39:46.561 fio: pid=2340395, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:46.561 11:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:46.561 11:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:46.561 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12660736, buflen=4096 00:39:46.561 fio: pid=2340396, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:46.561 11:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:46.561 11:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:46.818 00:39:46.818 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2340395: Sun Oct 6 11:34:44 2024 00:39:46.819 read: IOPS=2293, BW=9171KiB/s (9391kB/s)(28.3MiB/3163msec) 00:39:46.819 slat (usec): min=6, max=32719, avg=15.24, stdev=437.04 00:39:46.819 clat (usec): min=259, max=42309, avg=416.10, stdev=2089.02 00:39:46.819 lat 
(usec): min=270, max=74012, avg=431.33, stdev=2264.22 00:39:46.819 clat percentiles (usec): 00:39:46.819 | 1.00th=[ 269], 5.00th=[ 273], 10.00th=[ 273], 20.00th=[ 277], 00:39:46.819 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 318], 00:39:46.819 | 70.00th=[ 322], 80.00th=[ 334], 90.00th=[ 371], 95.00th=[ 375], 00:39:46.819 | 99.00th=[ 441], 99.50th=[ 482], 99.90th=[41157], 99.95th=[41157], 00:39:46.819 | 99.99th=[42206] 00:39:46.819 bw ( KiB/s): min= 93, max=13888, per=57.59%, avg=9664.83, stdev=5277.88, samples=6 00:39:46.819 iops : min= 23, max= 3472, avg=2416.17, stdev=1319.56, samples=6 00:39:46.819 lat (usec) : 500=99.57%, 750=0.14% 00:39:46.819 lat (msec) : 10=0.01%, 50=0.26% 00:39:46.819 cpu : usr=0.79%, sys=3.26%, ctx=7260, majf=0, minf=1 00:39:46.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:46.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.819 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.819 issued rwts: total=7253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:46.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:46.819 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2340396: Sun Oct 6 11:34:44 2024 00:39:46.819 read: IOPS=918, BW=3672KiB/s (3760kB/s)(12.1MiB/3367msec) 00:39:46.819 slat (usec): min=2, max=15654, avg=32.41, stdev=572.42 00:39:46.819 clat (usec): min=242, max=42892, avg=1046.76, stdev=5403.59 00:39:46.819 lat (usec): min=245, max=42922, avg=1079.18, stdev=5433.02 00:39:46.819 clat percentiles (usec): 00:39:46.819 | 1.00th=[ 251], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 277], 00:39:46.819 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 302], 00:39:46.819 | 70.00th=[ 314], 80.00th=[ 367], 90.00th=[ 375], 95.00th=[ 441], 00:39:46.819 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:39:46.819 | 99.99th=[42730] 00:39:46.819 bw ( KiB/s): min= 96, max=11307, per=17.50%, avg=2937.83, stdev=4714.86, samples=6 00:39:46.819 iops : min= 24, max= 2826, avg=734.33, stdev=1178.45, samples=6 00:39:46.819 lat (usec) : 250=0.65%, 500=97.02%, 750=0.39%, 1000=0.03% 00:39:46.819 lat (msec) : 4=0.03%, 20=0.06%, 50=1.78% 00:39:46.819 cpu : usr=0.39%, sys=1.07%, ctx=3099, majf=0, minf=2 00:39:46.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:46.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.819 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.819 issued rwts: total=3092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:46.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:46.819 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2340397: Sun Oct 6 11:34:44 2024 00:39:46.819 read: IOPS=1259, BW=5036KiB/s (5157kB/s)(14.5MiB/2950msec) 00:39:46.819 slat (nsec): min=6586, max=31029, avg=8167.99, stdev=2121.10 00:39:46.819 clat (usec): min=270, max=41976, avg=777.74, stdev=4127.11 00:39:46.819 lat (usec): min=277, max=41999, avg=785.91, stdev=4128.53 00:39:46.819 clat percentiles (usec): 00:39:46.819 | 1.00th=[ 285], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 302], 00:39:46.819 | 30.00th=[ 318], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 367], 00:39:46.819 | 70.00th=[ 371], 80.00th=[ 375], 90.00th=[ 375], 95.00th=[ 383], 00:39:46.819 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41681], 00:39:46.819 | 99.99th=[42206] 00:39:46.819 bw ( KiB/s): min= 104, max=10320, per=25.09%, avg=4211.20, stdev=4292.14, samples=5 00:39:46.819 iops : min= 26, max= 2580, avg=1052.80, stdev=1073.04, samples=5 00:39:46.819 lat (usec) : 500=98.57%, 750=0.22% 00:39:46.819 lat (msec) : 4=0.03%, 10=0.05%, 20=0.05%, 50=1.05% 00:39:46.819 cpu : usr=0.37%, sys=1.29%, ctx=3717, majf=0, minf=2 00:39:46.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:46.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.819 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.819 issued rwts: total=3715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:46.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:46.819 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2340398: Sun Oct 6 11:34:44 2024 00:39:46.819 read: IOPS=24, BW=97.9KiB/s (100kB/s)(272KiB/2777msec) 00:39:46.819 slat (nsec): min=11397, max=34055, avg=23948.45, stdev=2829.01 00:39:46.819 clat (usec): min=571, max=42231, avg=40429.47, stdev=4911.70 00:39:46.819 lat (usec): min=605, max=42257, avg=40453.44, stdev=4910.47 00:39:46.819 clat percentiles (usec): 00:39:46.819 | 1.00th=[ 570], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:39:46.819 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:46.819 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:39:46.819 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:46.819 | 99.99th=[42206] 00:39:46.819 bw ( KiB/s): min= 96, max= 104, per=0.58%, avg=97.60, stdev= 3.58, samples=5 00:39:46.819 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:39:46.819 lat (usec) : 750=1.45% 00:39:46.819 lat (msec) : 50=97.10% 00:39:46.819 cpu : usr=0.00%, sys=0.11%, ctx=70, majf=0, minf=2 00:39:46.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:46.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.819 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.819 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:46.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:46.819 00:39:46.819 Run status group 0 (all jobs): 00:39:46.819 READ: bw=16.4MiB/s (17.2MB/s), 97.9KiB/s-9171KiB/s (100kB/s-9391kB/s), io=55.2MiB (57.9MB), run=2777-3367msec 00:39:46.819 00:39:46.819 Disk stats (read/write): 00:39:46.819 nvme0n1: ios=7286/0, merge=0/0, ticks=3324/0, in_queue=3324, util=97.53% 00:39:46.819 nvme0n2: ios=3092/0, merge=0/0, ticks=3199/0, in_queue=3199, util=93.88% 00:39:46.819 nvme0n3: ios=3574/0, merge=0/0, ticks=3727/0, in_queue=3727, util=98.95% 00:39:46.819 nvme0n4: ios=64/0, merge=0/0, ticks=2587/0, in_queue=2587, util=96.45% 00:39:46.819 11:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:46.819 11:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:47.077 11:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:47.077 11:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:47.335 11:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:47.335 11:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:47.593 11:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:47.593 11:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:47.593 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:47.593 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2340131 00:39:47.593 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:47.593 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:47.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:47.850 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:47.850 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:39:47.850 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:47.850 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:47.850 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:47.850 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:47.850 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:39:47.850 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:47.850 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:47.850 nvmf hotplug test: fio failed as expected 00:39:47.850 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:39:48.108 11:34:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:48.108 rmmod nvme_tcp 00:39:48.108 rmmod nvme_fabrics 00:39:48.108 rmmod nvme_keyring 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 2337626 ']' 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 2337626 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2337626 ']' 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2337626 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2337626 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2337626' 00:39:48.108 killing process with pid 2337626 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2337626 00:39:48.108 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2337626 00:39:48.366 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:48.366 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:48.366 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:48.366 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:39:48.366 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 
00:39:48.366 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:39:48.366 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:48.366 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:48.366 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:48.366 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:48.366 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:48.366 11:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:50.896 11:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:50.896 00:39:50.896 real 0m25.493s 00:39:50.896 user 1m30.501s 00:39:50.896 sys 0m11.082s 00:39:50.896 11:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:50.896 11:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:50.896 ************************************ 00:39:50.896 END TEST nvmf_fio_target 00:39:50.896 ************************************ 00:39:50.896 11:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:50.896 11:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:50.896 11:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:50.896 11:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:50.896 ************************************ 00:39:50.896 START TEST nvmf_bdevio 00:39:50.896 ************************************ 00:39:50.896 11:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:50.896 * Looking for test storage... 
00:39:50.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:50.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.896 --rc genhtml_branch_coverage=1 00:39:50.896 --rc genhtml_function_coverage=1 00:39:50.896 --rc genhtml_legend=1 00:39:50.896 --rc geninfo_all_blocks=1 00:39:50.896 --rc geninfo_unexecuted_blocks=1 00:39:50.896 00:39:50.896 ' 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:50.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.896 --rc genhtml_branch_coverage=1 00:39:50.896 --rc genhtml_function_coverage=1 00:39:50.896 --rc genhtml_legend=1 00:39:50.896 --rc geninfo_all_blocks=1 00:39:50.896 --rc geninfo_unexecuted_blocks=1 00:39:50.896 00:39:50.896 ' 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:50.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.896 --rc genhtml_branch_coverage=1 00:39:50.896 --rc genhtml_function_coverage=1 00:39:50.896 --rc genhtml_legend=1 00:39:50.896 --rc geninfo_all_blocks=1 00:39:50.896 --rc geninfo_unexecuted_blocks=1 00:39:50.896 00:39:50.896 ' 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:50.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.896 --rc genhtml_branch_coverage=1 00:39:50.896 --rc genhtml_function_coverage=1 00:39:50.896 --rc genhtml_legend=1 00:39:50.896 --rc geninfo_all_blocks=1 00:39:50.896 --rc geninfo_unexecuted_blocks=1 00:39:50.896 00:39:50.896 ' 00:39:50.896 11:34:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:50.896 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:50.897 11:34:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:39:50.897 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:56.160 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:56.160 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:56.161 11:34:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:56.161 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:56.161 Found net devices under 0000:af:00.0: cvl_0_0 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:56.161 Found net devices under 0000:af:00.1: cvl_0_1 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:56.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:56.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:39:56.161 00:39:56.161 --- 10.0.0.2 ping statistics --- 00:39:56.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:56.161 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:56.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:56.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:39:56.161 00:39:56.161 --- 10.0.0.1 ping statistics --- 00:39:56.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:56.161 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:56.161 11:34:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=2344542 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 2344542 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2344542 ']' 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:56.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:56.161 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:56.161 [2024-10-06 11:34:53.660728] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:56.161 [2024-10-06 11:34:53.661629] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:39:56.162 [2024-10-06 11:34:53.661664] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:56.162 [2024-10-06 11:34:53.717881] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:56.419 [2024-10-06 11:34:53.757396] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:56.419 [2024-10-06 11:34:53.757436] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:56.419 [2024-10-06 11:34:53.757444] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:56.419 [2024-10-06 11:34:53.757450] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:56.419 [2024-10-06 11:34:53.757455] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:56.419 [2024-10-06 11:34:53.759021] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:39:56.420 [2024-10-06 11:34:53.759127] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:39:56.420 [2024-10-06 11:34:53.759235] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:39:56.420 [2024-10-06 11:34:53.759237] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:39:56.420 [2024-10-06 11:34:53.833414] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:39:56.420 [2024-10-06 11:34:53.833598] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:56.420 [2024-10-06 11:34:53.834452] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:56.420 [2024-10-06 11:34:53.834528] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:56.420 [2024-10-06 11:34:53.834594] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:56.420 [2024-10-06 11:34:53.903679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:56.420 Malloc0 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.420 11:34:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:56.420 [2024-10-06 11:34:53.971904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:56.420 { 00:39:56.420 "params": { 00:39:56.420 "name": "Nvme$subsystem", 00:39:56.420 "trtype": "$TEST_TRANSPORT", 00:39:56.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:56.420 "adrfam": "ipv4", 00:39:56.420 "trsvcid": "$NVMF_PORT", 00:39:56.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:56.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:56.420 "hdgst": ${hdgst:-false}, 00:39:56.420 "ddgst": ${ddgst:-false} 00:39:56.420 }, 00:39:56.420 "method": "bdev_nvme_attach_controller" 00:39:56.420 } 00:39:56.420 EOF 00:39:56.420 )") 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:39:56.420 11:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:39:56.420 "params": { 00:39:56.420 "name": "Nvme1", 00:39:56.420 "trtype": "tcp", 00:39:56.420 "traddr": "10.0.0.2", 00:39:56.420 "adrfam": "ipv4", 00:39:56.420 "trsvcid": "4420", 00:39:56.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:56.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:56.420 "hdgst": false, 00:39:56.420 "ddgst": false 00:39:56.420 }, 00:39:56.420 "method": "bdev_nvme_attach_controller" 00:39:56.420 }' 00:39:56.678 [2024-10-06 11:34:54.020347] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
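Note: condensed, the provisioning that bdevio.sh drives over RPC above, followed by the bdevio launch itself, looks roughly like the following. This assumes rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, and uses gen_nvmf_target_json as a stand-in for the helper that emits the bdev_nvme_attach_controller entry printed in the trace:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc=$spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192                                     # bdevio.sh@18
    $rpc bdev_malloc_create 64 512 -b Malloc0                                        # bdevio.sh@19: 64 MiB ramdisk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # bdevio.sh@20: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # bdevio.sh@21: Malloc0 becomes nsid 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # bdevio.sh@22

    # bdevio.sh@24: run the bdevio suite against that listener; the initiator config
    # is fed over a file descriptor (/dev/fd/62 in the trace), i.e. process substitution
    $spdk/test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)

The malloc bdev shows up on the initiator side as the 64 MiB, 512-byte-block Nvme1n1 target listed in the bdevio output below.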
00:39:56.678 [2024-10-06 11:34:54.020390] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2344575 ] 00:39:56.678 [2024-10-06 11:34:54.076011] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:56.678 [2024-10-06 11:34:54.117207] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:39:56.678 [2024-10-06 11:34:54.117306] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:39:56.678 [2024-10-06 11:34:54.117308] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:39:56.935 I/O targets: 00:39:56.935 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:39:56.935 00:39:56.935 00:39:56.935 CUnit - A unit testing framework for C - Version 2.1-3 00:39:56.935 http://cunit.sourceforge.net/ 00:39:56.935 00:39:56.935 00:39:56.935 Suite: bdevio tests on: Nvme1n1 00:39:56.935 Test: blockdev write read block ...passed 00:39:56.935 Test: blockdev write zeroes read block ...passed 00:39:56.935 Test: blockdev write zeroes read no split ...passed 00:39:56.935 Test: blockdev write zeroes read split ...passed 00:39:56.935 Test: blockdev write zeroes read split partial ...passed 00:39:56.935 Test: blockdev reset ...[2024-10-06 11:34:54.457901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:56.935 [2024-10-06 11:34:54.457961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c4bd0 (9): Bad file descriptor 00:39:56.935 [2024-10-06 11:34:54.503348] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:39:56.935 passed 00:39:57.192 Test: blockdev write read 8 blocks ...passed 00:39:57.192 Test: blockdev write read size > 128k ...passed 00:39:57.192 Test: blockdev write read invalid size ...passed 00:39:57.192 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:57.192 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:57.192 Test: blockdev write read max offset ...passed 00:39:57.192 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:57.192 Test: blockdev writev readv 8 blocks ...passed 00:39:57.192 Test: blockdev writev readv 30 x 1block ...passed 00:39:57.192 Test: blockdev writev readv block ...passed 00:39:57.192 Test: blockdev writev readv size > 128k ...passed 00:39:57.192 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:57.192 Test: blockdev comparev and writev ...[2024-10-06 11:34:54.715549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:57.193 [2024-10-06 11:34:54.715584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:57.193 [2024-10-06 11:34:54.715599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:57.193 [2024-10-06 11:34:54.715607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:57.193 [2024-10-06 11:34:54.715930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:57.193 [2024-10-06 11:34:54.715942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:39:57.193 [2024-10-06 11:34:54.715957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:57.193 [2024-10-06 11:34:54.715964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:39:57.193 [2024-10-06 11:34:54.716297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:57.193 [2024-10-06 11:34:54.716310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:39:57.193 [2024-10-06 11:34:54.716323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:57.193 [2024-10-06 11:34:54.716331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:57.193 [2024-10-06 11:34:54.716656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:57.193 [2024-10-06 11:34:54.716668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:39:57.193 [2024-10-06 11:34:54.716680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:57.193 [2024-10-06 11:34:54.716688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:39:57.193 passed 00:39:57.450 Test: blockdev nvme passthru rw ...passed 00:39:57.450 Test: blockdev nvme passthru vendor specific ...[2024-10-06 11:34:54.799379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:57.450 [2024-10-06 11:34:54.799401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:39:57.450 [2024-10-06 11:34:54.799536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:57.450 [2024-10-06 11:34:54.799547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:39:57.450 [2024-10-06 11:34:54.799678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:57.450 [2024-10-06 11:34:54.799689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:57.450 [2024-10-06 11:34:54.799810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:57.450 [2024-10-06 11:34:54.799821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:39:57.450 passed 00:39:57.450 Test: blockdev nvme admin passthru ...passed 00:39:57.450 Test: blockdev copy ...passed 00:39:57.450 00:39:57.450 Run Summary: Type Total Ran Passed Failed Inactive 00:39:57.450 suites 1 1 n/a 0 0 00:39:57.450 tests 23 23 23 0 0 00:39:57.450 asserts 152 152 152 0 n/a 00:39:57.450 00:39:57.450 Elapsed time = 1.115 seconds 00:39:57.450 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:57.450 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:57.450 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:57.450 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:57.450 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:57.708 rmmod nvme_tcp 00:39:57.708 rmmod nvme_fabrics 00:39:57.708 rmmod nvme_keyring 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
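Note: the teardown that follows the run summary is symmetric with the setup. Condensed from the trace (same rpc.py assumption as above):

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # bdevio.sh@26: drop the subsystem
    trap - SIGINT SIGTERM EXIT                              # bdevio.sh@28: clear the error/exit trap
    # nvmftestfini -> nvmfcleanup: flush, then unload the kernel NVMe/TCP stack
    sync
    modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away
    modprobe -v -r nvme-fabrics

The trace then kills the target by pid (killprocess 2344542, identified as reactor_3) and restores the iptables and namespace state before the next test starts.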
00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 2344542 ']' 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 2344542 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2344542 ']' 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2344542 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2344542 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2344542' 00:39:57.708 killing process with pid 2344542 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2344542 00:39:57.708 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2344542 00:39:57.966 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:57.966 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:57.966 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:57.966 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:39:57.966 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:57.966 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:39:57.966 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:39:57.966 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:57.966 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:57.966 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:57.966 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:57.966 11:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:59.865 11:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:59.865 00:39:59.865 real 0m9.458s 00:39:59.865 user 
0m8.364s 00:39:59.865 sys 0m4.924s 00:39:59.865 11:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:59.865 11:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:59.865 ************************************ 00:39:59.865 END TEST nvmf_bdevio 00:39:59.865 ************************************ 00:40:00.123 11:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:00.123 00:40:00.123 real 4m23.865s 00:40:00.123 user 8m58.102s 00:40:00.123 sys 1m47.979s 00:40:00.123 11:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:00.123 11:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:00.123 ************************************ 00:40:00.123 END TEST nvmf_target_core_interrupt_mode 00:40:00.123 ************************************ 00:40:00.123 11:34:57 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:00.123 11:34:57 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:00.123 11:34:57 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:00.123 11:34:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:00.123 ************************************ 00:40:00.123 START TEST nvmf_interrupt 00:40:00.123 ************************************ 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:00.123 * Looking for test storage... 
00:40:00.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:00.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.123 --rc genhtml_branch_coverage=1 00:40:00.123 --rc genhtml_function_coverage=1 00:40:00.123 --rc genhtml_legend=1 00:40:00.123 --rc geninfo_all_blocks=1 00:40:00.123 --rc geninfo_unexecuted_blocks=1 00:40:00.123 00:40:00.123 ' 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:00.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.123 --rc genhtml_branch_coverage=1 00:40:00.123 --rc genhtml_function_coverage=1 00:40:00.123 --rc genhtml_legend=1 00:40:00.123 --rc geninfo_all_blocks=1 00:40:00.123 --rc geninfo_unexecuted_blocks=1 00:40:00.123 00:40:00.123 ' 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:00.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.123 --rc genhtml_branch_coverage=1 00:40:00.123 --rc genhtml_function_coverage=1 00:40:00.123 --rc genhtml_legend=1 00:40:00.123 --rc geninfo_all_blocks=1 00:40:00.123 --rc geninfo_unexecuted_blocks=1 00:40:00.123 00:40:00.123 ' 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:00.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.123 --rc genhtml_branch_coverage=1 00:40:00.123 --rc genhtml_function_coverage=1 00:40:00.123 --rc genhtml_legend=1 00:40:00.123 --rc geninfo_all_blocks=1 00:40:00.123 --rc geninfo_unexecuted_blocks=1 00:40:00.123 00:40:00.123 ' 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:00.123 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:00.124 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:00.124 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:40:00.381 11:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:05.652 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:05.653 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:05.653 11:35:02 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:05.653 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:05.653 11:35:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:05.653 Found net devices under 0000:af:00.0: cvl_0_0 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:05.653 Found net devices under 0000:af:00.1: cvl_0_1 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:05.653 11:35:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:05.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:05.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:40:05.653 00:40:05.653 --- 10.0.0.2 ping statistics --- 00:40:05.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:05.653 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:40:05.653 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:05.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:05.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:40:05.912 00:40:05.912 --- 10.0.0.1 ping statistics --- 00:40:05.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:05.912 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:40:05.912 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:05.912 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:40:05.912 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:05.912 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:05.912 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:05.912 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:05.912 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:05.913 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:05.913 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:05.913 11:35:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:40:05.913 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:05.913 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:05.913 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:05.913 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=2348086 00:40:05.913 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 2348086 00:40:05.913 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:05.913 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 2348086 ']' 00:40:05.913 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:05.913 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:05.913 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:05.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:05.913 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:05.913 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:05.913 [2024-10-06 11:35:03.322893] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:05.913 [2024-10-06 11:35:03.323774] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:40:05.913 [2024-10-06 11:35:03.323807] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:05.913 [2024-10-06 11:35:03.380350] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:05.913 [2024-10-06 11:35:03.419429] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
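Note: nvmfappstart here launches a second nvmf_tgt for the interrupt test (-m 0x3, reactors on cores 0-1) and then blocks in waitforlisten until the RPC socket answers. A rough sketch of such a wait loop, not the exact autotest_common.sh implementation (the message and max_retries=100 match the trace; the polling mechanism is an assumption):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do                  # max_retries=100, as in the trace
            kill -0 "$pid" 2> /dev/null || return 1      # give up if the target died
            /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }

    waitforlisten 2348086   # pid recorded as nvmfpid in the trace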
00:40:05.913 [2024-10-06 11:35:03.419468] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:05.913 [2024-10-06 11:35:03.419478] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:05.913 [2024-10-06 11:35:03.419484] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:05.913 [2024-10-06 11:35:03.419489] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:05.913 [2024-10-06 11:35:03.420200] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:05.913 [2024-10-06 11:35:03.420201] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:05.913 [2024-10-06 11:35:03.480496] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:05.913 [2024-10-06 11:35:03.480766] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:05.913 [2024-10-06 11:35:03.480811] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:40:06.172 5000+0 records in 00:40:06.172 5000+0 records out 00:40:06.172 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0184116 s, 556 MB/s 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:06.172 AIO0 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:06.172 [2024-10-06 11:35:03.628987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:06.172 11:35:03 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:06.172 [2024-10-06 11:35:03.673270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2348086 0 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2348086 0 idle 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2348086 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2348086 -w 256 00:40:06.172 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2348086 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.21 reactor_0' 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2348086 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.21 reactor_0 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2348086 1 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2348086 1 idle 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2348086 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2348086 -w 256 00:40:06.432 11:35:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2348130 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2348130 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2348304 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
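Note: the idle/busy decisions above come from interrupt/common.sh, which samples one batch iteration of top for the target pid and reads the %CPU column of the reactor_N thread: 0.0 before load, 99.9 once the perf workload is running. Condensed from the commands in the trace:

    # %CPU of reactor <idx> inside process <pid> (field 9 of the top -H line)
    reactor_cpu() {
        local pid=$1 idx=$2
        top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" | sed -e 's/^\s*//g' | awk '{print $9}'
    }

    cpu_rate=$(reactor_cpu 2348086 0)   # e.g. "0.0" while idle, "99.9" under load
    cpu_rate=${cpu_rate%.*}             # integer part, matching cpu_rate=0 / cpu_rate=99 in the trace
    idle_threshold=30
    busy_threshold=30                   # the test lowers BUSY_THRESHOLD to 30 for interrupt mode
    if (( cpu_rate > idle_threshold )); then echo busy; else echo idle; fi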
00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2348086 0 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2348086 0 busy 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2348086 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2348086 -w 256 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2348086 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:00.37 reactor_0' 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2348086 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:00.37 reactor_0 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2348086 1 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2348086 1 busy 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2348086 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2348086 -w 256 00:40:06.692 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:06.951 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2348130 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:00.26 reactor_1' 00:40:06.951 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2348130 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:00.26 reactor_1 00:40:06.951 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:06.951 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:06.951 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:06.951 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:06.951 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:06.951 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:06.951 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:06.951 11:35:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:06.951 11:35:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2348304 00:40:17.003 Initializing NVMe Controllers 00:40:17.003 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:17.003 Controller IO queue size 256, less than required. 00:40:17.003 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:17.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:17.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:17.003 Initialization complete. Launching workers. 
00:40:17.003 ======================================================== 00:40:17.003 Latency(us) 00:40:17.003 Device Information : IOPS MiB/s Average min max 00:40:17.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16957.25 66.24 15104.67 3034.68 21081.38 00:40:17.004 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16621.65 64.93 15407.18 4941.73 21313.80 00:40:17.004 ======================================================== 00:40:17.004 Total : 33578.89 131.17 15254.41 3034.68 21313.80 00:40:17.004 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2348086 0 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2348086 0 idle 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2348086 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2348086 -w 256 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2348086 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:20.20 reactor_0' 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2348086 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:20.20 reactor_0 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2348086 1 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2348086 1 idle 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2348086 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2348086 -w 256 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2348130 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1' 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2348130 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:17.004 11:35:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:17.573 11:35:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:40:17.573 11:35:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:40:17.573 11:35:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:40:17.573 11:35:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:40:17.573 11:35:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2348086 0 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2348086 0 idle 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2348086 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2348086 -w 256 00:40:19.480 11:35:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2348086 root 20 0 128.2g 72192 33792 S 0.0 0.1 0:20.39 reactor_0' 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2348086 root 20 0 128.2g 72192 33792 S 0.0 0.1 0:20.39 reactor_0 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2348086 1 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2348086 1 idle 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2348086 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
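Quick arithmetic check on the spdk_nvme_perf summary above: with a 4 KiB block size (-o 4096), MiB/s is simply IOPS x 4096 / 2^20, and the overall average latency is the IOPS-weighted mean of the per-core averages. Re-deriving the reported columns with awk (numbers copied from the table above, purely illustrative):

  awk 'BEGIN {
      io = 4096 / (1024 * 1024)                      # MiB per 4 KiB request
      printf "core 2: %.2f MiB/s\n", 16957.25 * io   # ~66.24
      printf "core 3: %.2f MiB/s\n", 16621.65 * io   # ~64.93
      printf "total : %.2f MiB/s\n", 33578.89 * io   # ~131.17
      printf "avg latency: %.2f us\n", (16957.25 * 15104.67 + 16621.65 * 15407.18) / 33578.89   # ~15254, within rounding of 15254.41
  }'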
00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2348086 -w 256 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2348130 root 20 0 128.2g 72192 33792 S 0.0 0.1 0:10.07 reactor_1' 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2348130 root 20 0 128.2g 72192 33792 S 0.0 0.1 0:10.07 reactor_1 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:19.740 11:35:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:20.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:20.000 rmmod nvme_tcp 00:40:20.000 rmmod nvme_fabrics 00:40:20.000 rmmod nvme_keyring 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 
2348086 ']' 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 2348086 00:40:20.000 11:35:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 2348086 ']' 00:40:20.001 11:35:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 2348086 00:40:20.001 11:35:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:40:20.001 11:35:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:20.001 11:35:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2348086 00:40:20.260 11:35:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:20.260 11:35:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:20.260 11:35:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2348086' 00:40:20.260 killing process with pid 2348086 00:40:20.260 11:35:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 2348086 00:40:20.260 11:35:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 2348086 00:40:20.260 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:20.260 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:20.260 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:20.260 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:20.260 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:40:20.260 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:40:20.260 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:20.260 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:20.260 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:20.260 11:35:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:20.260 11:35:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:20.260 11:35:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:22.797 11:35:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:22.797 00:40:22.797 real 0m22.370s 00:40:22.797 user 0m39.422s 00:40:22.797 sys 0m8.181s 00:40:22.797 11:35:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:22.797 11:35:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:22.797 ************************************ 00:40:22.797 END TEST nvmf_interrupt 00:40:22.797 ************************************ 00:40:22.797 00:40:22.797 real 34m24.015s 00:40:22.797 user 85m7.696s 00:40:22.797 sys 9m56.408s 00:40:22.797 11:35:19 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:22.797 11:35:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:22.797 ************************************ 00:40:22.797 END TEST nvmf_tcp 00:40:22.797 ************************************ 00:40:22.797 11:35:19 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:40:22.798 11:35:19 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:22.798 11:35:19 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:22.798 11:35:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:22.798 11:35:19 -- common/autotest_common.sh@10 -- # set +x 00:40:22.798 ************************************ 00:40:22.798 START TEST spdkcli_nvmf_tcp 00:40:22.798 ************************************ 00:40:22.798 11:35:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:22.798 * Looking for test storage... 00:40:22.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:22.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:22.798 --rc genhtml_branch_coverage=1 00:40:22.798 --rc genhtml_function_coverage=1 00:40:22.798 --rc genhtml_legend=1 00:40:22.798 --rc geninfo_all_blocks=1 00:40:22.798 --rc geninfo_unexecuted_blocks=1 00:40:22.798 00:40:22.798 ' 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:22.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:22.798 --rc genhtml_branch_coverage=1 00:40:22.798 --rc genhtml_function_coverage=1 00:40:22.798 --rc genhtml_legend=1 00:40:22.798 --rc geninfo_all_blocks=1 00:40:22.798 --rc geninfo_unexecuted_blocks=1 00:40:22.798 00:40:22.798 ' 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:22.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:22.798 --rc genhtml_branch_coverage=1 00:40:22.798 --rc genhtml_function_coverage=1 00:40:22.798 --rc genhtml_legend=1 00:40:22.798 --rc geninfo_all_blocks=1 00:40:22.798 --rc geninfo_unexecuted_blocks=1 00:40:22.798 00:40:22.798 ' 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:22.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:22.798 --rc genhtml_branch_coverage=1 00:40:22.798 --rc genhtml_function_coverage=1 00:40:22.798 --rc genhtml_legend=1 00:40:22.798 --rc geninfo_all_blocks=1 00:40:22.798 --rc geninfo_unexecuted_blocks=1 00:40:22.798 00:40:22.798 ' 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:22.798 
11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:22.798 11:35:20 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:22.799 11:35:20 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:22.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2350931 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2350931 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 2350931 ']' 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:22.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:22.799 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:22.799 [2024-10-06 11:35:20.245858] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
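The spdkcli test that follows drives a target started with core mask 0x3 (cores 0 and 1, one reactor each, matching the two "Reactor started" notices below) and only proceeds once the RPC socket at /var/tmp/spdk.sock answers. A rough stand-alone equivalent of that launch-and-wait step, with the polling loop as an illustrative stand-in for the harness's waitforlisten helper rather than its real implementation:

  # Start nvmf_tgt on cores 0-1, then poll the default RPC socket until it responds.
  ./build/bin/nvmf_tgt -m 0x3 -p 0 &
  tgt_pid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt (pid $tgt_pid) is ready for spdkcli commands"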
00:40:22.799 [2024-10-06 11:35:20.245911] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2350931 ] 00:40:22.799 [2024-10-06 11:35:20.301756] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:22.799 [2024-10-06 11:35:20.341656] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:22.799 [2024-10-06 11:35:20.341658] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:23.059 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:23.059 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:40:23.059 11:35:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:23.059 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:23.059 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:23.059 11:35:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:23.059 11:35:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:23.059 11:35:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:23.059 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:23.059 11:35:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:23.059 11:35:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:23.059 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:23.059 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:23.059 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:23.059 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:23.059 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:23.059 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:23.059 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:23.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:23.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:23.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:23.059 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:23.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:23.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:23.059 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:23.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:23.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:40:23.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:23.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:23.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:23.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:23.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:23.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:23.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:23.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:23.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:23.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:23.059 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:23.059 ' 00:40:25.596 [2024-10-06 11:35:22.945771] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:26.973 [2024-10-06 11:35:24.165794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:28.877 [2024-10-06 11:35:26.412583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:30.781 [2024-10-06 11:35:28.342553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:32.685 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:32.685 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:32.685 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:32.685 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:32.685 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:32.685 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:32.685 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:32.685 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:32.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:32.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:32.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:32.685 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:32.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:32.685 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:32.685 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:32.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:32.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:32.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:32.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:32.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:32.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:32.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:32.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:32.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:32.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:32.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:32.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:32.685 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:32.685 11:35:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:32.685 11:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:32.685 11:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:32.685 11:35:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:32.685 11:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:32.685 11:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:32.685 11:35:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:32.685 11:35:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:32.945 11:35:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:32.945 11:35:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:32.945 11:35:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:32.945 11:35:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:32.945 11:35:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:32.945 
11:35:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:32.945 11:35:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:32.945 11:35:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:32.945 11:35:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:32.945 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:32.945 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:32.945 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:32.945 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:32.945 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:32.945 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:32.945 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:32.945 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:32.945 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:32.945 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:32.945 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:32.945 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:32.945 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:32.945 ' 00:40:38.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:38.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:38.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:38.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:38.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:38.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:38.220 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:38.220 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:38.220 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:38.220 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:38.220 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:38.220 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:38.220 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:38.220 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:38.221 11:35:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:38.221 11:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:38.221 11:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:38.221 
11:35:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2350931 00:40:38.221 11:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2350931 ']' 00:40:38.221 11:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2350931 00:40:38.221 11:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:40:38.221 11:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:38.221 11:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2350931 00:40:38.221 11:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:38.221 11:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:38.221 11:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2350931' 00:40:38.221 killing process with pid 2350931 00:40:38.221 11:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2350931 00:40:38.221 11:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2350931 00:40:38.480 11:35:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:38.480 11:35:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:38.480 11:35:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2350931 ']' 00:40:38.480 11:35:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2350931 00:40:38.480 11:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2350931 ']' 00:40:38.480 11:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2350931 00:40:38.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2350931) - No such process 00:40:38.480 11:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 2350931 is not found' 00:40:38.480 Process with pid 2350931 is not found 00:40:38.480 11:35:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:38.480 11:35:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:38.480 11:35:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:38.480 00:40:38.480 real 0m15.836s 00:40:38.480 user 0m32.965s 00:40:38.480 sys 0m0.655s 00:40:38.480 11:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:38.480 11:35:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:38.480 ************************************ 00:40:38.480 END TEST spdkcli_nvmf_tcp 00:40:38.480 ************************************ 00:40:38.480 11:35:35 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:38.480 11:35:35 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:38.480 11:35:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:38.480 11:35:35 -- common/autotest_common.sh@10 -- # set +x 00:40:38.480 ************************************ 00:40:38.480 START TEST nvmf_identify_passthru 00:40:38.480 ************************************ 00:40:38.480 11:35:35 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:38.480 * Looking for test 
storage... 00:40:38.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:38.480 11:35:35 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:38.480 11:35:35 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:40:38.480 11:35:35 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:38.480 11:35:36 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:38.480 11:35:36 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:38.481 11:35:36 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:38.481 11:35:36 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:38.481 11:35:36 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:38.481 11:35:36 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:38.481 11:35:36 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:38.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.481 --rc genhtml_branch_coverage=1 00:40:38.481 --rc genhtml_function_coverage=1 00:40:38.481 --rc genhtml_legend=1 00:40:38.481 --rc geninfo_all_blocks=1 00:40:38.481 --rc geninfo_unexecuted_blocks=1 00:40:38.481 00:40:38.481 ' 00:40:38.481 11:35:36 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:38.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.481 --rc genhtml_branch_coverage=1 00:40:38.481 --rc genhtml_function_coverage=1 00:40:38.481 --rc genhtml_legend=1 00:40:38.481 --rc geninfo_all_blocks=1 00:40:38.481 --rc geninfo_unexecuted_blocks=1 00:40:38.481 00:40:38.481 ' 00:40:38.481 11:35:36 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:38.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.481 --rc genhtml_branch_coverage=1 00:40:38.481 --rc genhtml_function_coverage=1 00:40:38.481 --rc genhtml_legend=1 00:40:38.481 --rc geninfo_all_blocks=1 00:40:38.481 --rc geninfo_unexecuted_blocks=1 00:40:38.481 00:40:38.481 ' 00:40:38.481 11:35:36 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:38.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.481 --rc genhtml_branch_coverage=1 00:40:38.481 --rc genhtml_function_coverage=1 00:40:38.481 --rc genhtml_legend=1 00:40:38.481 --rc geninfo_all_blocks=1 00:40:38.481 --rc geninfo_unexecuted_blocks=1 00:40:38.481 00:40:38.481 ' 00:40:38.481 11:35:36 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:38.481 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:38.481 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:38.481 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:38.481 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:38.481 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:38.481 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:38.481 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:38.481 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:38.746 11:35:36 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:38.746 11:35:36 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:38.746 11:35:36 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:38.746 11:35:36 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:38.746 11:35:36 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.746 11:35:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.746 11:35:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.746 11:35:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:38.746 11:35:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:38.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:38.746 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:38.746 11:35:36 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:38.746 11:35:36 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:38.747 11:35:36 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:38.747 11:35:36 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:38.747 11:35:36 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:38.747 11:35:36 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.747 11:35:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.747 11:35:36 nvmf_identify_passthru -- paths/export.sh@4 -- # 
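The "[: : integer expression expected" complaint above (nvmf/common.sh line 33) comes from the traced command '[' '' -eq 1 ']': an empty variable fed into a numeric test, which fails with an error instead of evaluating to a clean false. A tiny sketch of the failure mode and the usual guard, with an illustrative variable name rather than the original:

    flag=""                   # empty, as in the trace
    [ "$flag" -eq 1 ]         # "[: : integer expression expected", non-zero status
    [ "${flag:-0}" -eq 1 ]    # guarded form: empty defaults to 0, the test stays valid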
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.747 11:35:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:38.747 11:35:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.747 11:35:36 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:38.747 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:38.747 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:38.747 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:38.747 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:38.747 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:38.747 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:38.747 11:35:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:38.747 11:35:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:38.747 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:38.747 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:38.747 11:35:36 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:38.747 11:35:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:44.019 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:44.019 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:44.019 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:44.019 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:44.019 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:44.019 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:44.019 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:44.019 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:44.019 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:44.019 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:44.019 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:44.019 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:44.019 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:44.019 11:35:41 
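The arrays being populated above (e810, x722, mlx) are how the harness decides which NICs are usable for the NVMe-oF tests: a PCI bus cache is keyed by vendor:device ID (Intel 0x8086 parts such as 0x1592/0x159b/0x37d2 and the listed Mellanox 0x15b3 parts), and for this TCP run the E810 entries end up as pci_devs. Purely as an illustration of that ID matching, the same lookup with a stock tool would be (the harness uses its own pci_bus_cache, not lspci):

    # Intel E810 functions (vendor 0x8086, device 0x159b), the IDs matched in the trace;
    # on this host they are 0000:af:00.0 and 0000:af:00.1.
    lspci -D -d 8086:159b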
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:44.019 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:44.019 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:44.019 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:44.019 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:44.019 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:44.020 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:44.020 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:44.020 Found net devices under 0000:af:00.0: cvl_0_0 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:44.020 Found net devices under 0000:af:00.1: cvl_0_1 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:44.020 11:35:41 nvmf_identify_passthru -- 
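The "Found net devices under 0000:af:00.0: cvl_0_0" lines above come from a plain sysfs lookup: each PCI network function exposes its kernel interface name under /sys/bus/pci/devices/<bdf>/net/. A self-contained version of that lookup, mirroring the traced pci_net_devs glob (loop variable names are illustrative):

    for pci in 0000:af:00.0 0000:af:00.1; do
        # same idea as pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in the trace
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] || continue            # function has no netdev bound
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done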
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:44.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:44.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:40:44.020 00:40:44.020 --- 10.0.0.2 ping statistics --- 00:40:44.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:44.020 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:44.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:44.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:40:44.020 00:40:44.020 --- 10.0.0.1 ping statistics --- 00:40:44.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:44.020 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:44.020 11:35:41 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:44.020 11:35:41 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:44.020 11:35:41 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:44.020 11:35:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:44.020 11:35:41 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:44.020 11:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:40:44.020 11:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:40:44.020 11:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:40:44.020 11:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:40:44.020 11:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:40:44.020 11:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:40:44.020 11:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:44.020 11:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:44.020 11:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:40:44.020 11:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:40:44.020 11:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:40:44.020 11:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:5e:00.0 00:40:44.020 11:35:41 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:40:44.020 11:35:41 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:40:44.020 11:35:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:40:44.020 11:35:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:44.020 11:35:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:48.206 11:35:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 
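The nvmf_tcp_init block above is what turns the two physical ports into a self-contained TCP test bed: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP port 4420 on the initiator-facing interface, and one ping in each direction proves the path. Condensed from the traced commands, with the same interface names and addresses:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator to target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target back to initiator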
-- # nvme_serial_number=BTLJ7244049A1P0FGN 00:40:48.206 11:35:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:48.206 11:35:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:48.206 11:35:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:40:52.391 11:35:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:40:52.391 11:35:49 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:52.391 11:35:49 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:52.391 11:35:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:52.391 11:35:49 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:52.391 11:35:49 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:52.391 11:35:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:52.391 11:35:49 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2357793 00:40:52.391 11:35:49 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:52.391 11:35:49 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:52.391 11:35:49 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2357793 00:40:52.391 11:35:49 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 2357793 ']' 00:40:52.391 11:35:49 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:52.391 11:35:49 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:52.391 11:35:49 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:52.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:52.391 11:35:49 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:52.391 11:35:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:52.391 [2024-10-06 11:35:49.820577] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:40:52.391 [2024-10-06 11:35:49.820621] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:52.391 [2024-10-06 11:35:49.878492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:52.391 [2024-10-06 11:35:49.918637] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:52.391 [2024-10-06 11:35:49.918680] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
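Before any fabric configuration, the test records the drive's identity straight over PCIe: spdk_nvme_identify against traddr 0000:5e:00.0 supplies the serial (BTLJ7244049A1P0FGN) and model strings that the passthru subsystem will later have to reproduce over TCP, and nvmf_tgt is then started inside the target namespace with --wait-for-rpc so it pauses for configuration. A condensed sketch of that baseline step; $SPDK_DIR is a placeholder for the repository path used in the trace:

    bdf=0000:5e:00.0    # first NVMe bdf, as resolved via gen_nvme.sh | jq in the trace

    nvme_serial=$("$SPDK_DIR"/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 |
                  grep 'Serial Number:' | awk '{print $3}')
    nvme_model=$("$SPDK_DIR"/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 |
                 grep 'Model Number:' | awk '{print $3}')

    # Start the target in the test namespace, paused until configuration RPCs arrive.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &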
00:40:52.391 [2024-10-06 11:35:49.918688] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:52.391 [2024-10-06 11:35:49.918694] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:52.391 [2024-10-06 11:35:49.918698] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:52.391 [2024-10-06 11:35:49.920158] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:52.391 [2024-10-06 11:35:49.920180] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:40:52.391 [2024-10-06 11:35:49.920274] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:40:52.391 [2024-10-06 11:35:49.920275] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:52.650 11:35:49 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:52.650 11:35:49 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:40:52.650 11:35:49 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:52.650 11:35:49 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:52.650 11:35:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:52.650 INFO: Log level set to 20 00:40:52.650 INFO: Requests: 00:40:52.650 { 00:40:52.650 "jsonrpc": "2.0", 00:40:52.650 "method": "nvmf_set_config", 00:40:52.650 "id": 1, 00:40:52.650 "params": { 00:40:52.650 "admin_cmd_passthru": { 00:40:52.650 "identify_ctrlr": true 00:40:52.650 } 00:40:52.650 } 00:40:52.650 } 00:40:52.650 00:40:52.650 INFO: response: 00:40:52.650 { 00:40:52.650 "jsonrpc": "2.0", 00:40:52.650 "id": 1, 00:40:52.650 "result": true 00:40:52.650 } 00:40:52.650 00:40:52.650 11:35:49 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:52.650 11:35:49 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:52.650 11:35:49 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:52.650 11:35:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:52.650 INFO: Setting log level to 20 00:40:52.650 INFO: Setting log level to 20 00:40:52.650 INFO: Log level set to 20 00:40:52.650 INFO: Log level set to 20 00:40:52.650 INFO: Requests: 00:40:52.650 { 00:40:52.650 "jsonrpc": "2.0", 00:40:52.650 "method": "framework_start_init", 00:40:52.650 "id": 1 00:40:52.650 } 00:40:52.650 00:40:52.650 INFO: Requests: 00:40:52.650 { 00:40:52.650 "jsonrpc": "2.0", 00:40:52.650 "method": "framework_start_init", 00:40:52.650 "id": 1 00:40:52.650 } 00:40:52.650 00:40:52.650 [2024-10-06 11:35:50.068196] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:52.650 INFO: response: 00:40:52.650 { 00:40:52.650 "jsonrpc": "2.0", 00:40:52.650 "id": 1, 00:40:52.650 "result": true 00:40:52.650 } 00:40:52.650 00:40:52.650 INFO: response: 00:40:52.650 { 00:40:52.650 "jsonrpc": "2.0", 00:40:52.650 "id": 1, 00:40:52.650 "result": true 00:40:52.650 } 00:40:52.650 00:40:52.650 11:35:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:52.650 11:35:50 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:52.650 11:35:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:52.650 11:35:50 nvmf_identify_passthru -- 
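The JSON-RPC exchange above is the passthru-specific part of the bring-up, and the ordering is the point: nvmf_set_config with --passthru-identify-ctrlr has to land while the app is still held by --wait-for-rpc, framework_start_init then releases initialization (the "Custom identify ctrlr handler enabled" notice confirms the setting took), and only after that is the TCP transport created. The trace drives this through the harness's rpc_cmd wrapper; issued directly with SPDK's scripts/rpc.py against the socket named in the waitforlisten message, the sequence would look roughly like this (a sketch, not the literal harness code):

    ip netns exec cvl_0_0_ns_spdk scripts/rpc.py -s /var/tmp/spdk.sock nvmf_set_config --passthru-identify-ctrlr
    ip netns exec cvl_0_0_ns_spdk scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
    ip netns exec cvl_0_0_ns_spdk scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192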
common/autotest_common.sh@10 -- # set +x 00:40:52.650 INFO: Setting log level to 40 00:40:52.650 INFO: Setting log level to 40 00:40:52.650 INFO: Setting log level to 40 00:40:52.650 [2024-10-06 11:35:50.081688] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:52.650 11:35:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:52.650 11:35:50 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:52.651 11:35:50 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:52.651 11:35:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:52.651 11:35:50 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:40:52.651 11:35:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:52.651 11:35:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:55.932 Nvme0n1 00:40:55.932 11:35:52 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.932 11:35:52 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:55.932 11:35:52 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.932 11:35:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:55.932 11:35:52 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.932 11:35:52 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:55.932 11:35:52 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.932 11:35:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:55.932 11:35:52 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.932 11:35:52 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:55.932 11:35:52 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.932 11:35:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:55.932 [2024-10-06 11:35:52.984973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:55.932 11:35:52 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.932 11:35:52 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:55.932 11:35:52 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.932 11:35:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:55.932 [ 00:40:55.932 { 00:40:55.932 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:55.932 "subtype": "Discovery", 00:40:55.932 "listen_addresses": [], 00:40:55.932 "allow_any_host": true, 00:40:55.932 "hosts": [] 00:40:55.932 }, 00:40:55.932 { 00:40:55.932 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:55.932 "subtype": "NVMe", 00:40:55.932 "listen_addresses": [ 00:40:55.932 { 00:40:55.932 "trtype": "TCP", 00:40:55.932 "adrfam": "IPv4", 00:40:55.932 "traddr": "10.0.0.2", 00:40:55.932 "trsvcid": "4420" 00:40:55.932 } 00:40:55.932 ], 00:40:55.932 "allow_any_host": true, 00:40:55.932 "hosts": [], 00:40:55.932 "serial_number": 
"SPDK00000000000001", 00:40:55.932 "model_number": "SPDK bdev Controller", 00:40:55.932 "max_namespaces": 1, 00:40:55.932 "min_cntlid": 1, 00:40:55.932 "max_cntlid": 65519, 00:40:55.932 "namespaces": [ 00:40:55.932 { 00:40:55.932 "nsid": 1, 00:40:55.932 "bdev_name": "Nvme0n1", 00:40:55.932 "name": "Nvme0n1", 00:40:55.933 "nguid": "4BA1C66969AE4DE1BAF243F16442DFF3", 00:40:55.933 "uuid": "4ba1c669-69ae-4de1-baf2-43f16442dff3" 00:40:55.933 } 00:40:55.933 ] 00:40:55.933 } 00:40:55.933 ] 00:40:55.933 11:35:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.933 11:35:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:55.933 11:35:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:55.933 11:35:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:55.933 11:35:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:40:55.933 11:35:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:55.933 11:35:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:55.933 11:35:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:55.933 11:35:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:40:55.933 11:35:53 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:40:55.933 11:35:53 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:40:55.933 11:35:53 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:55.933 11:35:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.933 11:35:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:55.933 11:35:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.933 11:35:53 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:55.933 11:35:53 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:55.933 11:35:53 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:55.933 11:35:53 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:40:55.933 11:35:53 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:55.933 11:35:53 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:40:55.933 11:35:53 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:55.933 11:35:53 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:55.933 rmmod nvme_tcp 00:40:55.933 rmmod nvme_fabrics 00:40:55.933 rmmod nvme_keyring 00:40:55.933 11:35:53 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:55.933 11:35:53 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:40:55.933 11:35:53 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:40:55.933 11:35:53 nvmf_identify_passthru -- nvmf/common.sh@515 -- # 
'[' -n 2357793 ']' 00:40:55.933 11:35:53 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 2357793 00:40:55.933 11:35:53 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 2357793 ']' 00:40:55.933 11:35:53 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 2357793 00:40:55.933 11:35:53 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:40:55.933 11:35:53 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:55.933 11:35:53 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2357793 00:40:55.933 11:35:53 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:55.933 11:35:53 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:55.933 11:35:53 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2357793' 00:40:55.933 killing process with pid 2357793 00:40:55.933 11:35:53 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 2357793 00:40:55.933 11:35:53 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 2357793 00:40:57.308 11:35:54 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:57.308 11:35:54 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:57.308 11:35:54 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:57.308 11:35:54 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:40:57.308 11:35:54 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:57.308 11:35:54 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:40:57.308 11:35:54 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:40:57.308 11:35:54 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:57.308 11:35:54 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:57.308 11:35:54 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:57.308 11:35:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:57.308 11:35:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:59.841 11:35:56 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:59.841 00:40:59.841 real 0m21.036s 00:40:59.841 user 0m26.981s 00:40:59.841 sys 0m4.785s 00:40:59.841 11:35:56 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:59.841 11:35:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:59.841 ************************************ 00:40:59.841 END TEST nvmf_identify_passthru 00:40:59.841 ************************************ 00:40:59.841 11:35:56 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:59.841 11:35:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:59.841 11:35:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:59.841 11:35:56 -- common/autotest_common.sh@10 -- # set +x 00:40:59.841 ************************************ 00:40:59.841 START TEST nvmf_dif 00:40:59.841 ************************************ 00:40:59.841 11:35:56 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:59.841 * Looking for test 
storage... 00:40:59.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:59.841 11:35:57 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:59.841 11:35:57 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:40:59.841 11:35:57 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:59.841 11:35:57 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:59.841 11:35:57 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:59.841 11:35:57 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:59.841 11:35:57 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:59.841 11:35:57 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:40:59.841 11:35:57 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:40:59.841 11:35:57 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:40:59.841 11:35:57 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:40:59.841 11:35:57 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:40:59.841 11:35:57 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:40:59.841 11:35:57 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:40:59.841 11:35:57 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:59.841 11:35:57 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:40:59.841 11:35:57 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:40:59.841 11:35:57 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:59.841 11:35:57 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:59.841 11:35:57 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:40:59.842 11:35:57 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:40:59.842 11:35:57 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:59.842 11:35:57 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:40:59.842 11:35:57 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:40:59.842 11:35:57 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:40:59.842 11:35:57 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:40:59.842 11:35:57 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:59.842 11:35:57 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:40:59.842 11:35:57 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:40:59.842 11:35:57 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:59.842 11:35:57 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:59.842 11:35:57 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:40:59.842 11:35:57 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:59.842 11:35:57 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:59.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:59.842 --rc genhtml_branch_coverage=1 00:40:59.842 --rc genhtml_function_coverage=1 00:40:59.842 --rc genhtml_legend=1 00:40:59.842 --rc geninfo_all_blocks=1 00:40:59.842 --rc geninfo_unexecuted_blocks=1 00:40:59.842 00:40:59.842 ' 00:40:59.842 11:35:57 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:59.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:59.842 --rc genhtml_branch_coverage=1 00:40:59.842 --rc genhtml_function_coverage=1 00:40:59.842 --rc genhtml_legend=1 00:40:59.842 --rc geninfo_all_blocks=1 00:40:59.842 --rc geninfo_unexecuted_blocks=1 00:40:59.842 00:40:59.842 ' 00:40:59.842 11:35:57 nvmf_dif -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:59.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:59.842 --rc genhtml_branch_coverage=1 00:40:59.842 --rc genhtml_function_coverage=1 00:40:59.842 --rc genhtml_legend=1 00:40:59.842 --rc geninfo_all_blocks=1 00:40:59.842 --rc geninfo_unexecuted_blocks=1 00:40:59.842 00:40:59.842 ' 00:40:59.842 11:35:57 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:59.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:59.842 --rc genhtml_branch_coverage=1 00:40:59.842 --rc genhtml_function_coverage=1 00:40:59.842 --rc genhtml_legend=1 00:40:59.842 --rc geninfo_all_blocks=1 00:40:59.842 --rc geninfo_unexecuted_blocks=1 00:40:59.842 00:40:59.842 ' 00:40:59.842 11:35:57 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:59.842 11:35:57 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:40:59.842 11:35:57 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:59.842 11:35:57 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:59.842 11:35:57 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:59.842 11:35:57 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:59.842 11:35:57 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:59.842 11:35:57 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:59.842 11:35:57 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:40:59.842 11:35:57 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:59.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:59.842 11:35:57 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:59.842 11:35:57 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:40:59.842 11:35:57 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:59.842 11:35:57 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:59.842 11:35:57 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:59.842 11:35:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:59.842 11:35:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:59.842 11:35:57 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:40:59.842 11:35:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:05.110 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:05.110 
11:36:01 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:05.110 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:05.110 Found net devices under 0000:af:00.0: cvl_0_0 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:05.110 Found net devices under 0000:af:00.1: cvl_0_1 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:05.110 11:36:01 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:05.111 11:36:01 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:05.111 11:36:01 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:05.111 11:36:01 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:05.111 11:36:01 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:05.111 11:36:01 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:05.111 11:36:01 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:05.111 11:36:01 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:05.111 11:36:01 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:05.111 11:36:01 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:05.111 11:36:01 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:05.111 11:36:01 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:05.111 11:36:01 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:05.111 11:36:02 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:05.111 11:36:02 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:05.111 11:36:02 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:05.111 11:36:02 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:05.111 11:36:02 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:05.111 11:36:02 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:05.111 11:36:02 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:05.111 11:36:02 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:05.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:05.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:41:05.111 00:41:05.111 --- 10.0.0.2 ping statistics --- 00:41:05.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:05.111 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:41:05.111 11:36:02 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:05.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:05.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:41:05.111 00:41:05.111 --- 10.0.0.1 ping statistics --- 00:41:05.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:05.111 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:41:05.111 11:36:02 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:05.111 11:36:02 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:41:05.111 11:36:02 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:41:05.111 11:36:02 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:07.014 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:41:07.014 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:41:07.014 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:41:07.014 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:41:07.273 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:41:07.273 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:41:07.273 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:41:07.273 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:41:07.273 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:41:07.273 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:41:07.273 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:41:07.273 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:41:07.273 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:41:07.273 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:41:07.273 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:41:07.273 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:41:07.273 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:41:07.273 11:36:04 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:07.273 11:36:04 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:07.273 11:36:04 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:07.273 11:36:04 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:07.273 11:36:04 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:07.273 11:36:04 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:07.273 11:36:04 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:41:07.273 11:36:04 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:41:07.273 11:36:04 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:07.273 11:36:04 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:07.273 11:36:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:07.273 11:36:04 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=2363069 00:41:07.273 11:36:04 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 2363069 00:41:07.273 11:36:04 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:41:07.273 11:36:04 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 2363069 ']' 00:41:07.273 11:36:04 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:07.274 11:36:04 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:07.274 11:36:04 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:41:07.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:07.274 11:36:04 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:07.274 11:36:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:07.532 [2024-10-06 11:36:04.862903] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:41:07.532 [2024-10-06 11:36:04.862946] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:07.532 [2024-10-06 11:36:04.919794] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.532 [2024-10-06 11:36:04.958848] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:07.532 [2024-10-06 11:36:04.958888] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:07.532 [2024-10-06 11:36:04.958896] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:07.532 [2024-10-06 11:36:04.958901] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:07.532 [2024-10-06 11:36:04.958907] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:07.532 [2024-10-06 11:36:04.959432] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:41:07.532 11:36:05 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:07.532 11:36:05 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:41:07.532 11:36:05 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:07.532 11:36:05 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:07.532 11:36:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:07.532 11:36:05 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:07.532 11:36:05 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:41:07.532 11:36:05 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:41:07.532 11:36:05 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.532 11:36:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:07.532 [2024-10-06 11:36:05.100602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:07.532 11:36:05 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.532 11:36:05 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:41:07.532 11:36:05 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:07.532 11:36:05 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:07.532 11:36:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:07.791 ************************************ 00:41:07.791 START TEST fio_dif_1_default 00:41:07.791 ************************************ 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:07.791 bdev_null0 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:07.791 [2024-10-06 11:36:05.168912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:07.791 { 00:41:07.791 "params": { 00:41:07.791 "name": "Nvme$subsystem", 00:41:07.791 "trtype": "$TEST_TRANSPORT", 00:41:07.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:07.791 "adrfam": "ipv4", 00:41:07.791 "trsvcid": "$NVMF_PORT", 00:41:07.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:07.791 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:41:07.791 "hdgst": ${hdgst:-false}, 00:41:07.791 "ddgst": ${ddgst:-false} 00:41:07.791 }, 00:41:07.791 "method": "bdev_nvme_attach_controller" 00:41:07.791 } 00:41:07.791 EOF 00:41:07.791 )") 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:41:07.791 11:36:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:07.791 "params": { 00:41:07.791 "name": "Nvme0", 00:41:07.791 "trtype": "tcp", 00:41:07.791 "traddr": "10.0.0.2", 00:41:07.791 "adrfam": "ipv4", 00:41:07.791 "trsvcid": "4420", 00:41:07.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:07.791 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:07.791 "hdgst": false, 00:41:07.791 "ddgst": false 00:41:07.791 }, 00:41:07.792 "method": "bdev_nvme_attach_controller" 00:41:07.792 }' 00:41:07.792 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:07.792 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:07.792 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:07.792 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:07.792 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:07.792 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:07.792 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:07.792 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:07.792 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:07.792 11:36:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:08.050 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:08.050 fio-3.35 00:41:08.050 Starting 1 thread 00:41:20.263 00:41:20.263 filename0: (groupid=0, jobs=1): err= 0: pid=2363429: Sun Oct 6 11:36:16 2024 00:41:20.263 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10021msec) 00:41:20.263 slat (nsec): min=5756, max=27507, avg=6217.43, stdev=1116.24 00:41:20.263 clat (usec): min=40820, max=45259, avg=41049.73, stdev=344.17 00:41:20.263 lat (usec): min=40826, max=45287, avg=41055.95, stdev=344.57 00:41:20.263 clat percentiles (usec): 00:41:20.263 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:20.263 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:20.263 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:41:20.263 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:41:20.263 | 99.99th=[45351] 00:41:20.263 bw ( KiB/s): min= 384, max= 416, per=99.59%, avg=388.80, stdev=11.72, samples=20 00:41:20.263 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:41:20.263 lat (msec) : 50=100.00% 00:41:20.263 cpu : usr=92.30%, sys=7.45%, ctx=18, majf=0, minf=0 00:41:20.263 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:20.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.263 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:20.263 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:20.263 00:41:20.263 Run status group 0 (all jobs): 
00:41:20.263 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10021-10021msec 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.263 00:41:20.263 real 0m11.055s 00:41:20.263 user 0m16.026s 00:41:20.263 sys 0m1.021s 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:20.263 ************************************ 00:41:20.263 END TEST fio_dif_1_default 00:41:20.263 ************************************ 00:41:20.263 11:36:16 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:20.263 11:36:16 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:20.263 11:36:16 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:20.263 11:36:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:20.263 ************************************ 00:41:20.263 START TEST fio_dif_1_multi_subsystems 00:41:20.263 ************************************ 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.263 bdev_null0 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.263 [2024-10-06 11:36:16.283469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.263 bdev_null1 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.263 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:20.264 { 00:41:20.264 "params": { 00:41:20.264 "name": "Nvme$subsystem", 00:41:20.264 "trtype": "$TEST_TRANSPORT", 00:41:20.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:20.264 "adrfam": "ipv4", 00:41:20.264 "trsvcid": "$NVMF_PORT", 00:41:20.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:20.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:20.264 "hdgst": ${hdgst:-false}, 00:41:20.264 "ddgst": ${ddgst:-false} 00:41:20.264 }, 00:41:20.264 "method": "bdev_nvme_attach_controller" 00:41:20.264 } 00:41:20.264 EOF 00:41:20.264 )") 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1345 -- # grep libasan 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:20.264 { 00:41:20.264 "params": { 00:41:20.264 "name": "Nvme$subsystem", 00:41:20.264 "trtype": "$TEST_TRANSPORT", 00:41:20.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:20.264 "adrfam": "ipv4", 00:41:20.264 "trsvcid": "$NVMF_PORT", 00:41:20.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:20.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:20.264 "hdgst": ${hdgst:-false}, 00:41:20.264 "ddgst": ${ddgst:-false} 00:41:20.264 }, 00:41:20.264 "method": "bdev_nvme_attach_controller" 00:41:20.264 } 00:41:20.264 EOF 00:41:20.264 )") 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:20.264 "params": { 00:41:20.264 "name": "Nvme0", 00:41:20.264 "trtype": "tcp", 00:41:20.264 "traddr": "10.0.0.2", 00:41:20.264 "adrfam": "ipv4", 00:41:20.264 "trsvcid": "4420", 00:41:20.264 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:20.264 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:20.264 "hdgst": false, 00:41:20.264 "ddgst": false 00:41:20.264 }, 00:41:20.264 "method": "bdev_nvme_attach_controller" 00:41:20.264 },{ 00:41:20.264 "params": { 00:41:20.264 "name": "Nvme1", 00:41:20.264 "trtype": "tcp", 00:41:20.264 "traddr": "10.0.0.2", 00:41:20.264 "adrfam": "ipv4", 00:41:20.264 "trsvcid": "4420", 00:41:20.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:20.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:20.264 "hdgst": false, 00:41:20.264 "ddgst": false 00:41:20.264 }, 00:41:20.264 "method": "bdev_nvme_attach_controller" 00:41:20.264 }' 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:20.264 11:36:16 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:20.264 11:36:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.264 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:20.264 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:20.264 fio-3.35 00:41:20.264 Starting 2 threads 00:41:30.245 00:41:30.245 filename0: (groupid=0, jobs=1): err= 0: pid=2365733: Sun Oct 6 11:36:27 2024 00:41:30.245 read: IOPS=96, BW=388KiB/s (397kB/s)(3888KiB/10026msec) 00:41:30.245 slat (nsec): min=5883, max=34037, avg=10839.55, stdev=7273.78 00:41:30.245 clat (usec): min=40795, max=42071, avg=41224.40, stdev=431.58 00:41:30.245 lat (usec): min=40804, max=42094, avg=41235.24, stdev=431.63 00:41:30.245 clat percentiles (usec): 00:41:30.245 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:41:30.245 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:30.245 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:41:30.245 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:30.245 | 99.99th=[42206] 00:41:30.245 bw ( KiB/s): min= 352, max= 416, per=49.92%, avg=387.20, stdev=14.31, samples=20 00:41:30.245 iops : min= 88, max= 104, avg=96.80, stdev= 3.58, samples=20 00:41:30.245 lat (msec) : 50=100.00% 00:41:30.245 cpu : usr=98.93%, sys=0.79%, ctx=15, majf=0, minf=100 00:41:30.245 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:30.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.245 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.245 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:30.245 filename1: (groupid=0, jobs=1): err= 0: pid=2365734: Sun Oct 6 11:36:27 2024 00:41:30.245 read: IOPS=96, BW=388KiB/s (397kB/s)(3888KiB/10030msec) 00:41:30.245 slat (nsec): min=6065, max=67464, avg=12105.82, stdev=9462.39 00:41:30.245 clat (usec): min=40832, max=42919, avg=41231.93, stdev=448.32 00:41:30.245 lat (usec): min=40839, max=42934, avg=41244.03, stdev=448.46 00:41:30.245 clat percentiles (usec): 00:41:30.245 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:30.245 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:30.245 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:41:30.245 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:41:30.245 | 99.99th=[42730] 00:41:30.245 bw ( KiB/s): min= 352, max= 416, per=49.92%, avg=387.20, stdev=14.31, samples=20 00:41:30.245 iops : min= 88, max= 104, avg=96.80, stdev= 3.58, samples=20 00:41:30.245 lat (msec) : 50=100.00% 00:41:30.245 cpu : usr=97.34%, sys=2.34%, ctx=47, majf=0, minf=140 00:41:30.245 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:30.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.245 
issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.245 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:30.245 00:41:30.245 Run status group 0 (all jobs): 00:41:30.245 READ: bw=775KiB/s (794kB/s), 388KiB/s-388KiB/s (397kB/s-397kB/s), io=7776KiB (7963kB), run=10026-10030msec 00:41:30.245 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:30.245 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:30.245 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:30.245 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:30.245 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:30.245 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:30.245 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.245 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.245 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.245 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:30.246 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.246 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.246 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.246 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:30.246 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:30.246 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:30.246 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:30.246 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.246 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.246 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.246 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:30.246 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.246 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.246 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.246 00:41:30.246 real 0m11.210s 00:41:30.246 user 0m26.816s 00:41:30.246 sys 0m0.667s 00:41:30.246 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:30.246 11:36:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.246 ************************************ 00:41:30.246 END TEST fio_dif_1_multi_subsystems 00:41:30.246 ************************************ 00:41:30.246 11:36:27 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:41:30.246 11:36:27 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 
-le 1 ']' 00:41:30.246 11:36:27 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:30.246 11:36:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:30.246 ************************************ 00:41:30.246 START TEST fio_dif_rand_params 00:41:30.246 ************************************ 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.246 bdev_null0 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.246 [2024-10-06 11:36:27.572266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.246 11:36:27 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:30.246 { 00:41:30.246 "params": { 00:41:30.246 "name": "Nvme$subsystem", 00:41:30.246 "trtype": "$TEST_TRANSPORT", 00:41:30.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:30.246 "adrfam": "ipv4", 00:41:30.246 "trsvcid": "$NVMF_PORT", 00:41:30.246 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:30.246 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:30.246 "hdgst": ${hdgst:-false}, 00:41:30.246 "ddgst": ${ddgst:-false} 00:41:30.246 }, 00:41:30.246 "method": "bdev_nvme_attach_controller" 00:41:30.246 } 00:41:30.246 EOF 00:41:30.246 )") 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:30.246 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:30.247 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:30.247 11:36:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:41:30.247 11:36:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:41:30.247 11:36:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:30.247 "params": { 00:41:30.247 "name": "Nvme0", 00:41:30.247 "trtype": "tcp", 00:41:30.247 "traddr": "10.0.0.2", 00:41:30.247 "adrfam": "ipv4", 00:41:30.247 "trsvcid": "4420", 00:41:30.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:30.247 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:30.247 "hdgst": false, 00:41:30.247 "ddgst": false 00:41:30.247 }, 00:41:30.247 "method": "bdev_nvme_attach_controller" 00:41:30.247 }' 00:41:30.247 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:30.247 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:30.247 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:30.247 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:30.247 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:30.247 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:30.247 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:30.247 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:30.247 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:30.247 11:36:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:30.506 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:30.506 ... 
00:41:30.506 fio-3.35 00:41:30.506 Starting 3 threads 00:41:35.870 00:41:35.870 filename0: (groupid=0, jobs=1): err= 0: pid=2367624: Sun Oct 6 11:36:33 2024 00:41:35.870 read: IOPS=292, BW=36.5MiB/s (38.3MB/s)(183MiB/5004msec) 00:41:35.870 slat (nsec): min=6154, max=29680, avg=9700.32, stdev=2644.36 00:41:35.870 clat (usec): min=3501, max=51684, avg=10248.72, stdev=11761.58 00:41:35.870 lat (usec): min=3508, max=51697, avg=10258.42, stdev=11761.86 00:41:35.870 clat percentiles (usec): 00:41:35.870 | 1.00th=[ 3785], 5.00th=[ 4015], 10.00th=[ 4359], 20.00th=[ 5342], 00:41:35.870 | 30.00th=[ 5866], 40.00th=[ 6259], 50.00th=[ 6587], 60.00th=[ 7111], 00:41:35.870 | 70.00th=[ 7832], 80.00th=[ 8848], 90.00th=[10290], 95.00th=[46924], 00:41:35.870 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:41:35.870 | 99.99th=[51643] 00:41:35.870 bw ( KiB/s): min=25344, max=46592, per=35.10%, avg=37745.78, stdev=6723.49, samples=9 00:41:35.870 iops : min= 198, max= 364, avg=294.89, stdev=52.53, samples=9 00:41:35.870 lat (msec) : 4=4.58%, 10=84.21%, 20=2.39%, 50=8.27%, 100=0.55% 00:41:35.870 cpu : usr=94.64%, sys=5.04%, ctx=6, majf=0, minf=0 00:41:35.870 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:35.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.870 issued rwts: total=1463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:35.870 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:35.870 filename0: (groupid=0, jobs=1): err= 0: pid=2367625: Sun Oct 6 11:36:33 2024 00:41:35.870 read: IOPS=286, BW=35.8MiB/s (37.5MB/s)(180MiB/5023msec) 00:41:35.870 slat (nsec): min=6123, max=46205, avg=9512.42, stdev=2884.85 00:41:35.870 clat (usec): min=3555, max=88032, avg=10465.28, stdev=12003.70 00:41:35.870 lat (usec): min=3561, max=88043, avg=10474.79, stdev=12003.96 00:41:35.870 clat percentiles (usec): 00:41:35.870 | 1.00th=[ 4015], 5.00th=[ 4228], 10.00th=[ 4424], 20.00th=[ 5014], 00:41:35.870 | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6718], 60.00th=[ 7242], 00:41:35.870 | 70.00th=[ 8356], 80.00th=[ 9372], 90.00th=[11469], 95.00th=[47973], 00:41:35.870 | 99.00th=[50070], 99.50th=[51119], 99.90th=[52167], 99.95th=[87557], 00:41:35.870 | 99.99th=[87557] 00:41:35.870 bw ( KiB/s): min=27136, max=53504, per=34.16%, avg=36736.00, stdev=8095.66, samples=10 00:41:35.870 iops : min= 212, max= 418, avg=287.00, stdev=63.25, samples=10 00:41:35.870 lat (msec) : 4=0.76%, 10=83.59%, 20=6.95%, 50=7.37%, 100=1.32% 00:41:35.870 cpu : usr=94.03%, sys=5.68%, ctx=9, majf=0, minf=9 00:41:35.870 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:35.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.870 issued rwts: total=1438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:35.870 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:35.870 filename0: (groupid=0, jobs=1): err= 0: pid=2367626: Sun Oct 6 11:36:33 2024 00:41:35.870 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(165MiB/5005msec) 00:41:35.870 slat (nsec): min=6093, max=25603, avg=9878.69, stdev=2769.87 00:41:35.870 clat (usec): min=3637, max=89522, avg=11366.65, stdev=13003.41 00:41:35.870 lat (usec): min=3645, max=89534, avg=11376.53, stdev=13003.55 00:41:35.870 clat percentiles (usec): 00:41:35.870 | 1.00th=[ 4015], 5.00th=[ 4359], 10.00th=[ 
4555], 20.00th=[ 5800], 00:41:35.870 | 30.00th=[ 6325], 40.00th=[ 6718], 50.00th=[ 7111], 60.00th=[ 7832], 00:41:35.870 | 70.00th=[ 8848], 80.00th=[ 9765], 90.00th=[12649], 95.00th=[47973], 00:41:35.870 | 99.00th=[50594], 99.50th=[51643], 99.90th=[88605], 99.95th=[89654], 00:41:35.870 | 99.99th=[89654] 00:41:35.870 bw ( KiB/s): min=27904, max=41728, per=31.35%, avg=33715.20, stdev=4692.64, samples=10 00:41:35.870 iops : min= 218, max= 326, avg=263.40, stdev=36.66, samples=10 00:41:35.870 lat (msec) : 4=0.91%, 10=81.20%, 20=8.04%, 50=8.42%, 100=1.44% 00:41:35.870 cpu : usr=94.28%, sys=5.40%, ctx=8, majf=0, minf=0 00:41:35.870 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:35.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.870 issued rwts: total=1319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:35.870 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:35.870 00:41:35.870 Run status group 0 (all jobs): 00:41:35.870 READ: bw=105MiB/s (110MB/s), 32.9MiB/s-36.5MiB/s (34.5MB/s-38.3MB/s), io=528MiB (553MB), run=5004-5023msec 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.131 bdev_null0 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.131 [2024-10-06 11:36:33.635937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.131 bdev_null1 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.131 11:36:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.131 bdev_null2 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.131 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:36.132 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.132 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.132 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@56 -- # cat 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:36.392 { 00:41:36.392 "params": { 00:41:36.392 "name": "Nvme$subsystem", 00:41:36.392 "trtype": "$TEST_TRANSPORT", 00:41:36.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:36.392 "adrfam": "ipv4", 00:41:36.392 "trsvcid": "$NVMF_PORT", 00:41:36.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:36.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:36.392 "hdgst": ${hdgst:-false}, 00:41:36.392 "ddgst": ${ddgst:-false} 00:41:36.392 }, 00:41:36.392 "method": "bdev_nvme_attach_controller" 00:41:36.392 } 00:41:36.392 EOF 00:41:36.392 )") 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:36.392 { 00:41:36.392 "params": { 00:41:36.392 "name": "Nvme$subsystem", 00:41:36.392 "trtype": "$TEST_TRANSPORT", 00:41:36.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:36.392 "adrfam": "ipv4", 00:41:36.392 "trsvcid": "$NVMF_PORT", 00:41:36.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:36.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:36.392 "hdgst": ${hdgst:-false}, 00:41:36.392 "ddgst": ${ddgst:-false} 00:41:36.392 }, 00:41:36.392 "method": 
"bdev_nvme_attach_controller" 00:41:36.392 } 00:41:36.392 EOF 00:41:36.392 )") 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:36.392 11:36:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:36.392 { 00:41:36.392 "params": { 00:41:36.392 "name": "Nvme$subsystem", 00:41:36.392 "trtype": "$TEST_TRANSPORT", 00:41:36.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:36.392 "adrfam": "ipv4", 00:41:36.392 "trsvcid": "$NVMF_PORT", 00:41:36.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:36.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:36.393 "hdgst": ${hdgst:-false}, 00:41:36.393 "ddgst": ${ddgst:-false} 00:41:36.393 }, 00:41:36.393 "method": "bdev_nvme_attach_controller" 00:41:36.393 } 00:41:36.393 EOF 00:41:36.393 )") 00:41:36.393 11:36:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:36.393 11:36:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:41:36.393 11:36:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:41:36.393 11:36:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:36.393 "params": { 00:41:36.393 "name": "Nvme0", 00:41:36.393 "trtype": "tcp", 00:41:36.393 "traddr": "10.0.0.2", 00:41:36.393 "adrfam": "ipv4", 00:41:36.393 "trsvcid": "4420", 00:41:36.393 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:36.393 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:36.393 "hdgst": false, 00:41:36.393 "ddgst": false 00:41:36.393 }, 00:41:36.393 "method": "bdev_nvme_attach_controller" 00:41:36.393 },{ 00:41:36.393 "params": { 00:41:36.393 "name": "Nvme1", 00:41:36.393 "trtype": "tcp", 00:41:36.393 "traddr": "10.0.0.2", 00:41:36.393 "adrfam": "ipv4", 00:41:36.393 "trsvcid": "4420", 00:41:36.393 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:36.393 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:36.393 "hdgst": false, 00:41:36.393 "ddgst": false 00:41:36.393 }, 00:41:36.393 "method": "bdev_nvme_attach_controller" 00:41:36.393 },{ 00:41:36.393 "params": { 00:41:36.393 "name": "Nvme2", 00:41:36.393 "trtype": "tcp", 00:41:36.393 "traddr": "10.0.0.2", 00:41:36.393 "adrfam": "ipv4", 00:41:36.393 "trsvcid": "4420", 00:41:36.393 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:36.393 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:36.393 "hdgst": false, 00:41:36.393 "ddgst": false 00:41:36.393 }, 00:41:36.393 "method": "bdev_nvme_attach_controller" 00:41:36.393 }' 00:41:36.393 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:36.393 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:36.393 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:36.393 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:36.393 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:36.393 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:36.393 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # 
asan_lib= 00:41:36.393 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:36.393 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:36.393 11:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:36.653 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:36.653 ... 00:41:36.653 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:36.653 ... 00:41:36.653 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:36.653 ... 00:41:36.653 fio-3.35 00:41:36.653 Starting 24 threads 00:41:48.851 00:41:48.851 filename0: (groupid=0, jobs=1): err= 0: pid=2368675: Sun Oct 6 11:36:45 2024 00:41:48.851 read: IOPS=531, BW=2124KiB/s (2175kB/s)(20.8MiB/10003msec) 00:41:48.851 slat (usec): min=7, max=105, avg=43.09, stdev=20.70 00:41:48.851 clat (usec): min=5844, max=34647, avg=29742.55, stdev=1822.03 00:41:48.851 lat (usec): min=5854, max=34692, avg=29785.64, stdev=1824.99 00:41:48.851 clat percentiles (usec): 00:41:48.851 | 1.00th=[24511], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:41:48.851 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:41:48.851 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:41:48.851 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:41:48.851 | 99.99th=[34866] 00:41:48.851 bw ( KiB/s): min= 2043, max= 2304, per=4.24%, avg=2121.32, stdev=77.60, samples=19 00:41:48.851 iops : min= 510, max= 576, avg=530.21, stdev=19.39, samples=19 00:41:48.851 lat (msec) : 10=0.30%, 20=0.60%, 50=99.10% 00:41:48.851 cpu : usr=98.39%, sys=1.19%, ctx=31, majf=0, minf=30 00:41:48.851 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:48.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.851 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.851 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.851 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.851 filename0: (groupid=0, jobs=1): err= 0: pid=2368676: Sun Oct 6 11:36:45 2024 00:41:48.851 read: IOPS=519, BW=2078KiB/s (2128kB/s)(20.6MiB/10131msec) 00:41:48.851 slat (nsec): min=4731, max=93085, avg=26169.81, stdev=12192.56 00:41:48.851 clat (msec): min=27, max=169, avg=30.54, stdev= 7.65 00:41:48.851 lat (msec): min=27, max=169, avg=30.57, stdev= 7.65 00:41:48.851 clat percentiles (msec): 00:41:48.851 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:41:48.851 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:41:48.851 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:41:48.851 | 99.00th=[ 32], 99.50th=[ 47], 99.90th=[ 167], 99.95th=[ 169], 00:41:48.851 | 99.99th=[ 169] 00:41:48.851 bw ( KiB/s): min= 1912, max= 2176, per=4.20%, avg=2098.30, stdev=77.06, samples=20 00:41:48.851 iops : min= 478, max= 544, avg=524.50, stdev=19.19, samples=20 00:41:48.851 lat (msec) : 50=99.70%, 250=0.30% 00:41:48.851 cpu : usr=96.51%, sys=1.98%, ctx=76, majf=0, minf=30 00:41:48.852 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:48.852 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.852 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.852 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.852 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.852 filename0: (groupid=0, jobs=1): err= 0: pid=2368677: Sun Oct 6 11:36:45 2024 00:41:48.852 read: IOPS=518, BW=2073KiB/s (2123kB/s)(20.5MiB/10125msec) 00:41:48.852 slat (usec): min=7, max=103, avg=42.27, stdev=21.14 00:41:48.852 clat (msec): min=28, max=169, avg=30.43, stdev= 7.98 00:41:48.852 lat (msec): min=28, max=169, avg=30.47, stdev= 7.98 00:41:48.852 clat percentiles (msec): 00:41:48.852 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:41:48.852 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:41:48.852 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:41:48.852 | 99.00th=[ 32], 99.50th=[ 69], 99.90th=[ 169], 99.95th=[ 169], 00:41:48.852 | 99.99th=[ 169] 00:41:48.852 bw ( KiB/s): min= 1916, max= 2176, per=4.19%, avg=2092.35, stdev=86.04, samples=20 00:41:48.852 iops : min= 479, max= 544, avg=523.05, stdev=21.48, samples=20 00:41:48.852 lat (msec) : 50=99.39%, 100=0.30%, 250=0.30% 00:41:48.852 cpu : usr=98.55%, sys=1.03%, ctx=14, majf=0, minf=21 00:41:48.852 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:48.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.852 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.852 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.852 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.852 filename0: (groupid=0, jobs=1): err= 0: pid=2368678: Sun Oct 6 11:36:45 2024 00:41:48.852 read: IOPS=521, BW=2085KiB/s (2135kB/s)(20.7MiB/10160msec) 00:41:48.852 slat (nsec): min=6891, max=94614, avg=37453.33, stdev=20764.17 00:41:48.852 clat (msec): min=11, max=167, avg=30.34, stdev= 7.61 00:41:48.852 lat (msec): min=11, max=167, avg=30.38, stdev= 7.61 00:41:48.852 clat percentiles (msec): 00:41:48.852 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:41:48.852 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:41:48.852 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:41:48.852 | 99.00th=[ 32], 99.50th=[ 33], 99.90th=[ 167], 99.95th=[ 167], 00:41:48.852 | 99.99th=[ 167] 00:41:48.852 bw ( KiB/s): min= 1924, max= 2176, per=4.21%, avg=2105.55, stdev=77.12, samples=20 00:41:48.852 iops : min= 481, max= 544, avg=526.35, stdev=19.31, samples=20 00:41:48.852 lat (msec) : 20=0.34%, 50=99.36%, 250=0.30% 00:41:48.852 cpu : usr=98.50%, sys=1.07%, ctx=12, majf=0, minf=31 00:41:48.852 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:48.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.852 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.852 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.852 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.852 filename0: (groupid=0, jobs=1): err= 0: pid=2368679: Sun Oct 6 11:36:45 2024 00:41:48.852 read: IOPS=523, BW=2095KiB/s (2146kB/s)(20.7MiB/10125msec) 00:41:48.852 slat (usec): min=5, max=109, avg=40.09, stdev=22.59 00:41:48.852 clat (msec): min=14, max=169, avg=30.16, stdev= 8.13 00:41:48.852 lat (msec): min=14, max=169, avg=30.20, stdev= 8.13 00:41:48.852 clat percentiles 
(msec): 00:41:48.852 | 1.00th=[ 18], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 30], 00:41:48.852 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:41:48.852 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:41:48.852 | 99.00th=[ 42], 99.50th=[ 56], 99.90th=[ 169], 99.95th=[ 169], 00:41:48.852 | 99.99th=[ 169] 00:41:48.852 bw ( KiB/s): min= 1912, max= 2331, per=4.23%, avg=2114.45, stdev=106.53, samples=20 00:41:48.852 iops : min= 478, max= 582, avg=528.50, stdev=26.60, samples=20 00:41:48.852 lat (msec) : 20=1.36%, 50=98.04%, 100=0.30%, 250=0.30% 00:41:48.852 cpu : usr=98.52%, sys=1.07%, ctx=15, majf=0, minf=24 00:41:48.852 IO depths : 1=5.2%, 2=10.6%, 4=22.1%, 8=54.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:41:48.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.852 complete : 0=0.0%, 4=93.4%, 8=1.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.852 issued rwts: total=5304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.852 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.852 filename0: (groupid=0, jobs=1): err= 0: pid=2368680: Sun Oct 6 11:36:45 2024 00:41:48.852 read: IOPS=521, BW=2085KiB/s (2136kB/s)(20.7MiB/10158msec) 00:41:48.852 slat (nsec): min=5985, max=92353, avg=35153.85, stdev=20125.08 00:41:48.852 clat (msec): min=15, max=168, avg=30.33, stdev= 7.61 00:41:48.852 lat (msec): min=15, max=168, avg=30.36, stdev= 7.61 00:41:48.852 clat percentiles (msec): 00:41:48.852 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:41:48.852 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:41:48.852 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:41:48.852 | 99.00th=[ 32], 99.50th=[ 32], 99.90th=[ 167], 99.95th=[ 167], 00:41:48.852 | 99.99th=[ 169] 00:41:48.852 bw ( KiB/s): min= 1935, max= 2180, per=4.21%, avg=2105.55, stdev=75.72, samples=20 00:41:48.852 iops : min= 483, max= 545, avg=526.20, stdev=19.02, samples=20 00:41:48.852 lat (msec) : 20=0.30%, 50=99.40%, 250=0.30% 00:41:48.852 cpu : usr=98.45%, sys=1.13%, ctx=11, majf=0, minf=28 00:41:48.852 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:48.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.852 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.852 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.852 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.852 filename0: (groupid=0, jobs=1): err= 0: pid=2368681: Sun Oct 6 11:36:45 2024 00:41:48.852 read: IOPS=519, BW=2077KiB/s (2127kB/s)(20.6MiB/10138msec) 00:41:48.852 slat (usec): min=5, max=106, avg=45.92, stdev=20.35 00:41:48.852 clat (msec): min=28, max=171, avg=30.36, stdev= 7.75 00:41:48.852 lat (msec): min=28, max=171, avg=30.41, stdev= 7.75 00:41:48.852 clat percentiles (msec): 00:41:48.852 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:41:48.852 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:41:48.852 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:41:48.852 | 99.00th=[ 32], 99.50th=[ 50], 99.90th=[ 169], 99.95th=[ 169], 00:41:48.852 | 99.99th=[ 171] 00:41:48.852 bw ( KiB/s): min= 1882, max= 2176, per=4.19%, avg=2096.80, stdev=91.46, samples=20 00:41:48.852 iops : min= 470, max= 544, avg=524.10, stdev=22.92, samples=20 00:41:48.852 lat (msec) : 50=99.70%, 250=0.30% 00:41:48.852 cpu : usr=98.15%, sys=1.43%, ctx=21, majf=0, minf=33 00:41:48.852 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 
8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:48.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.852 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.852 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.852 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.852 filename0: (groupid=0, jobs=1): err= 0: pid=2368682: Sun Oct 6 11:36:45 2024 00:41:48.852 read: IOPS=521, BW=2085KiB/s (2135kB/s)(20.7MiB/10160msec) 00:41:48.852 slat (nsec): min=5985, max=94404, avg=37159.18, stdev=20525.30 00:41:48.852 clat (msec): min=15, max=168, avg=30.32, stdev= 7.60 00:41:48.852 lat (msec): min=15, max=168, avg=30.35, stdev= 7.60 00:41:48.852 clat percentiles (msec): 00:41:48.852 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:41:48.852 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:41:48.852 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:41:48.852 | 99.00th=[ 32], 99.50th=[ 32], 99.90th=[ 167], 99.95th=[ 167], 00:41:48.852 | 99.99th=[ 169] 00:41:48.852 bw ( KiB/s): min= 1924, max= 2180, per=4.21%, avg=2105.75, stdev=77.32, samples=20 00:41:48.852 iops : min= 481, max= 545, avg=526.40, stdev=19.36, samples=20 00:41:48.852 lat (msec) : 20=0.30%, 50=99.40%, 250=0.30% 00:41:48.852 cpu : usr=98.51%, sys=1.06%, ctx=15, majf=0, minf=28 00:41:48.852 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:48.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.852 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.852 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.852 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.852 filename1: (groupid=0, jobs=1): err= 0: pid=2368683: Sun Oct 6 11:36:45 2024 00:41:48.852 read: IOPS=521, BW=2085KiB/s (2135kB/s)(20.7MiB/10160msec) 00:41:48.852 slat (nsec): min=6752, max=77488, avg=16942.49, stdev=11405.15 00:41:48.852 clat (msec): min=15, max=167, avg=30.55, stdev= 7.59 00:41:48.852 lat (msec): min=15, max=167, avg=30.57, stdev= 7.58 00:41:48.852 clat percentiles (msec): 00:41:48.852 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:41:48.852 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:41:48.852 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:41:48.852 | 99.00th=[ 32], 99.50th=[ 33], 99.90th=[ 167], 99.95th=[ 167], 00:41:48.852 | 99.99th=[ 167] 00:41:48.852 bw ( KiB/s): min= 1924, max= 2176, per=4.21%, avg=2105.55, stdev=77.12, samples=20 00:41:48.852 iops : min= 481, max= 544, avg=526.35, stdev=19.31, samples=20 00:41:48.852 lat (msec) : 20=0.30%, 50=99.40%, 250=0.30% 00:41:48.852 cpu : usr=97.72%, sys=1.58%, ctx=155, majf=0, minf=36 00:41:48.852 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:48.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.852 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.852 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.852 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.852 filename1: (groupid=0, jobs=1): err= 0: pid=2368684: Sun Oct 6 11:36:45 2024 00:41:48.852 read: IOPS=523, BW=2095KiB/s (2145kB/s)(20.8MiB/10154msec) 00:41:48.852 slat (usec): min=5, max=108, avg=46.43, stdev=20.73 00:41:48.852 clat (msec): min=15, max=168, avg=30.11, stdev= 7.79 00:41:48.853 lat (msec): min=15, 
max=168, avg=30.15, stdev= 7.79 00:41:48.853 clat percentiles (msec): 00:41:48.853 | 1.00th=[ 22], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:41:48.853 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:41:48.853 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:41:48.853 | 99.00th=[ 31], 99.50th=[ 33], 99.90th=[ 169], 99.95th=[ 169], 00:41:48.853 | 99.99th=[ 169] 00:41:48.853 bw ( KiB/s): min= 1935, max= 2352, per=4.23%, avg=2114.15, stdev=92.89, samples=20 00:41:48.853 iops : min= 483, max= 588, avg=528.35, stdev=23.37, samples=20 00:41:48.853 lat (msec) : 20=0.86%, 50=98.83%, 250=0.30% 00:41:48.853 cpu : usr=98.56%, sys=1.00%, ctx=22, majf=0, minf=30 00:41:48.853 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:41:48.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.853 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.853 issued rwts: total=5318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.853 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.853 filename1: (groupid=0, jobs=1): err= 0: pid=2368685: Sun Oct 6 11:36:45 2024 00:41:48.853 read: IOPS=522, BW=2091KiB/s (2141kB/s)(20.7MiB/10130msec) 00:41:48.853 slat (usec): min=4, max=105, avg=32.29, stdev=21.64 00:41:48.853 clat (msec): min=16, max=169, avg=30.32, stdev= 8.09 00:41:48.853 lat (msec): min=16, max=169, avg=30.35, stdev= 8.09 00:41:48.853 clat percentiles (msec): 00:41:48.853 | 1.00th=[ 22], 5.00th=[ 25], 10.00th=[ 30], 20.00th=[ 30], 00:41:48.853 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:41:48.853 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:41:48.853 | 99.00th=[ 42], 99.50th=[ 45], 99.90th=[ 169], 99.95th=[ 169], 00:41:48.853 | 99.99th=[ 169] 00:41:48.853 bw ( KiB/s): min= 1897, max= 2240, per=4.22%, avg=2110.35, stdev=84.90, samples=20 00:41:48.853 iops : min= 474, max= 560, avg=527.50, stdev=21.19, samples=20 00:41:48.853 lat (msec) : 20=0.59%, 50=99.07%, 100=0.04%, 250=0.30% 00:41:48.853 cpu : usr=98.70%, sys=0.88%, ctx=15, majf=0, minf=33 00:41:48.853 IO depths : 1=3.8%, 2=7.7%, 4=16.5%, 8=61.7%, 16=10.3%, 32=0.0%, >=64=0.0% 00:41:48.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.853 complete : 0=0.0%, 4=92.2%, 8=3.7%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.853 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.853 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.853 filename1: (groupid=0, jobs=1): err= 0: pid=2368686: Sun Oct 6 11:36:45 2024 00:41:48.853 read: IOPS=524, BW=2098KiB/s (2149kB/s)(20.6MiB/10076msec) 00:41:48.853 slat (nsec): min=5513, max=94312, avg=25653.78, stdev=17307.65 00:41:48.853 clat (msec): min=15, max=100, avg=30.25, stdev= 4.09 00:41:48.853 lat (msec): min=15, max=100, avg=30.27, stdev= 4.09 00:41:48.853 clat percentiles (msec): 00:41:48.853 | 1.00th=[ 28], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:41:48.853 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:41:48.853 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:41:48.853 | 99.00th=[ 32], 99.50th=[ 36], 99.90th=[ 101], 99.95th=[ 101], 00:41:48.853 | 99.99th=[ 102] 00:41:48.853 bw ( KiB/s): min= 1935, max= 2180, per=4.20%, avg=2101.55, stdev=73.22, samples=20 00:41:48.853 iops : min= 483, max= 545, avg=525.20, stdev=18.40, samples=20 00:41:48.853 lat (msec) : 20=0.30%, 50=99.39%, 250=0.30% 00:41:48.853 cpu : usr=98.60%, sys=0.97%, ctx=12, 
majf=0, minf=38 00:41:48.853 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:48.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.853 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.853 issued rwts: total=5286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.853 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.853 filename1: (groupid=0, jobs=1): err= 0: pid=2368687: Sun Oct 6 11:36:45 2024 00:41:48.853 read: IOPS=519, BW=2077KiB/s (2127kB/s)(20.6MiB/10137msec) 00:41:48.853 slat (nsec): min=5979, max=96834, avg=36345.99, stdev=20485.01 00:41:48.853 clat (msec): min=27, max=167, avg=30.43, stdev= 7.70 00:41:48.853 lat (msec): min=27, max=167, avg=30.47, stdev= 7.70 00:41:48.853 clat percentiles (msec): 00:41:48.853 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:41:48.853 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:41:48.853 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:41:48.853 | 99.00th=[ 32], 99.50th=[ 55], 99.90th=[ 167], 99.95th=[ 167], 00:41:48.853 | 99.99th=[ 169] 00:41:48.853 bw ( KiB/s): min= 1923, max= 2176, per=4.19%, avg=2096.80, stdev=78.10, samples=20 00:41:48.853 iops : min= 480, max= 544, avg=524.05, stdev=19.65, samples=20 00:41:48.853 lat (msec) : 50=99.39%, 100=0.30%, 250=0.30% 00:41:48.853 cpu : usr=98.35%, sys=1.22%, ctx=13, majf=0, minf=31 00:41:48.853 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:48.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.853 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.853 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.853 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.853 filename1: (groupid=0, jobs=1): err= 0: pid=2368688: Sun Oct 6 11:36:45 2024 00:41:48.853 read: IOPS=519, BW=2078KiB/s (2127kB/s)(20.6MiB/10135msec) 00:41:48.853 slat (usec): min=7, max=106, avg=45.64, stdev=20.63 00:41:48.853 clat (msec): min=24, max=169, avg=30.36, stdev= 7.76 00:41:48.853 lat (msec): min=24, max=169, avg=30.40, stdev= 7.76 00:41:48.853 clat percentiles (msec): 00:41:48.853 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:41:48.853 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:41:48.853 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:41:48.853 | 99.00th=[ 32], 99.50th=[ 50], 99.90th=[ 169], 99.95th=[ 169], 00:41:48.853 | 99.99th=[ 169] 00:41:48.853 bw ( KiB/s): min= 1882, max= 2176, per=4.19%, avg=2096.80, stdev=91.46, samples=20 00:41:48.853 iops : min= 470, max= 544, avg=524.10, stdev=22.92, samples=20 00:41:48.853 lat (msec) : 50=99.62%, 100=0.08%, 250=0.30% 00:41:48.853 cpu : usr=98.51%, sys=1.07%, ctx=13, majf=0, minf=36 00:41:48.853 IO depths : 1=6.2%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:48.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.853 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.853 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.853 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.853 filename1: (groupid=0, jobs=1): err= 0: pid=2368689: Sun Oct 6 11:36:45 2024 00:41:48.853 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10009msec) 00:41:48.853 slat (usec): min=4, max=106, avg=38.45, stdev=24.32 00:41:48.853 clat (usec): min=12033, 
max=35796, avg=29838.67, stdev=1632.21 00:41:48.853 lat (usec): min=12041, max=35805, avg=29877.12, stdev=1633.04 00:41:48.853 clat percentiles (usec): 00:41:48.853 | 1.00th=[24511], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:41:48.853 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:41:48.853 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:41:48.853 | 99.00th=[31065], 99.50th=[31589], 99.90th=[34341], 99.95th=[34866], 00:41:48.853 | 99.99th=[35914] 00:41:48.853 bw ( KiB/s): min= 2048, max= 2304, per=4.25%, avg=2122.11, stdev=77.69, samples=19 00:41:48.853 iops : min= 512, max= 576, avg=530.53, stdev=19.42, samples=19 00:41:48.853 lat (msec) : 20=0.90%, 50=99.10% 00:41:48.853 cpu : usr=98.14%, sys=1.44%, ctx=28, majf=0, minf=39 00:41:48.853 IO depths : 1=6.1%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:48.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.853 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.853 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.853 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.853 filename1: (groupid=0, jobs=1): err= 0: pid=2368690: Sun Oct 6 11:36:45 2024 00:41:48.853 read: IOPS=523, BW=2096KiB/s (2146kB/s)(20.6MiB/10078msec) 00:41:48.853 slat (nsec): min=7371, max=71468, avg=14885.02, stdev=7213.89 00:41:48.853 clat (msec): min=20, max=100, avg=30.40, stdev= 3.93 00:41:48.853 lat (msec): min=20, max=100, avg=30.41, stdev= 3.93 00:41:48.853 clat percentiles (msec): 00:41:48.853 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:41:48.853 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:41:48.853 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:41:48.853 | 99.00th=[ 32], 99.50th=[ 38], 99.90th=[ 101], 99.95th=[ 101], 00:41:48.853 | 99.99th=[ 101] 00:41:48.853 bw ( KiB/s): min= 1924, max= 2176, per=4.20%, avg=2099.15, stdev=75.83, samples=20 00:41:48.853 iops : min= 481, max= 544, avg=524.75, stdev=18.92, samples=20 00:41:48.853 lat (msec) : 50=99.70%, 250=0.30% 00:41:48.853 cpu : usr=98.47%, sys=1.11%, ctx=14, majf=0, minf=38 00:41:48.853 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:48.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.853 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.853 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.853 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.853 filename2: (groupid=0, jobs=1): err= 0: pid=2368691: Sun Oct 6 11:36:45 2024 00:41:48.853 read: IOPS=527, BW=2110KiB/s (2161kB/s)(20.8MiB/10099msec) 00:41:48.853 slat (nsec): min=4535, max=92277, avg=13487.54, stdev=11346.77 00:41:48.853 clat (msec): min=12, max=100, avg=30.21, stdev= 4.22 00:41:48.853 lat (msec): min=12, max=100, avg=30.23, stdev= 4.22 00:41:48.853 clat percentiles (msec): 00:41:48.853 | 1.00th=[ 19], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:41:48.853 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:41:48.853 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:41:48.853 | 99.00th=[ 32], 99.50th=[ 32], 99.90th=[ 101], 99.95th=[ 101], 00:41:48.853 | 99.99th=[ 101] 00:41:48.853 bw ( KiB/s): min= 2048, max= 2304, per=4.25%, avg=2124.80, stdev=76.58, samples=20 00:41:48.853 iops : min= 512, max= 576, avg=531.20, stdev=19.14, samples=20 00:41:48.853 lat 
(msec) : 20=1.20%, 50=98.50%, 250=0.30% 00:41:48.853 cpu : usr=98.22%, sys=1.35%, ctx=23, majf=0, minf=47 00:41:48.853 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:48.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.854 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.854 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.854 filename2: (groupid=0, jobs=1): err= 0: pid=2368692: Sun Oct 6 11:36:45 2024 00:41:48.854 read: IOPS=520, BW=2081KiB/s (2131kB/s)(20.6MiB/10147msec) 00:41:48.854 slat (usec): min=9, max=108, avg=46.23, stdev=20.45 00:41:48.854 clat (msec): min=24, max=169, avg=30.30, stdev= 7.67 00:41:48.854 lat (msec): min=24, max=169, avg=30.35, stdev= 7.67 00:41:48.854 clat percentiles (msec): 00:41:48.854 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:41:48.854 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:41:48.854 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:41:48.854 | 99.00th=[ 32], 99.50th=[ 37], 99.90th=[ 169], 99.95th=[ 169], 00:41:48.854 | 99.99th=[ 169] 00:41:48.854 bw ( KiB/s): min= 1965, max= 2176, per=4.20%, avg=2100.70, stdev=71.20, samples=20 00:41:48.854 iops : min= 491, max= 544, avg=525.05, stdev=17.78, samples=20 00:41:48.854 lat (msec) : 50=99.70%, 250=0.30% 00:41:48.854 cpu : usr=98.66%, sys=0.91%, ctx=16, majf=0, minf=44 00:41:48.854 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:48.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.854 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.854 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.854 filename2: (groupid=0, jobs=1): err= 0: pid=2368693: Sun Oct 6 11:36:45 2024 00:41:48.854 read: IOPS=518, BW=2073KiB/s (2123kB/s)(20.5MiB/10124msec) 00:41:48.854 slat (usec): min=8, max=110, avg=43.76, stdev=20.41 00:41:48.854 clat (msec): min=28, max=169, avg=30.42, stdev= 7.98 00:41:48.854 lat (msec): min=28, max=169, avg=30.47, stdev= 7.98 00:41:48.854 clat percentiles (msec): 00:41:48.854 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:41:48.854 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:41:48.854 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:41:48.854 | 99.00th=[ 32], 99.50th=[ 68], 99.90th=[ 169], 99.95th=[ 169], 00:41:48.854 | 99.99th=[ 169] 00:41:48.854 bw ( KiB/s): min= 1920, max= 2176, per=4.19%, avg=2092.55, stdev=85.62, samples=20 00:41:48.854 iops : min= 480, max= 544, avg=523.10, stdev=21.37, samples=20 00:41:48.854 lat (msec) : 50=99.39%, 100=0.30%, 250=0.30% 00:41:48.854 cpu : usr=98.64%, sys=0.95%, ctx=14, majf=0, minf=32 00:41:48.854 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:48.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.854 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.854 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.854 filename2: (groupid=0, jobs=1): err= 0: pid=2368694: Sun Oct 6 11:36:45 2024 00:41:48.854 read: IOPS=521, BW=2085KiB/s (2135kB/s)(20.7MiB/10161msec) 00:41:48.854 slat (nsec): min=7763, 
max=94744, avg=38197.34, stdev=20547.35 00:41:48.854 clat (msec): min=15, max=168, avg=30.32, stdev= 7.60 00:41:48.854 lat (msec): min=15, max=168, avg=30.36, stdev= 7.60 00:41:48.854 clat percentiles (msec): 00:41:48.854 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:41:48.854 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:41:48.854 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:41:48.854 | 99.00th=[ 32], 99.50th=[ 33], 99.90th=[ 167], 99.95th=[ 167], 00:41:48.854 | 99.99th=[ 169] 00:41:48.854 bw ( KiB/s): min= 1924, max= 2176, per=4.21%, avg=2105.55, stdev=77.12, samples=20 00:41:48.854 iops : min= 481, max= 544, avg=526.35, stdev=19.31, samples=20 00:41:48.854 lat (msec) : 20=0.30%, 50=99.40%, 250=0.30% 00:41:48.854 cpu : usr=98.42%, sys=1.15%, ctx=12, majf=0, minf=36 00:41:48.854 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:48.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.854 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.854 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.854 filename2: (groupid=0, jobs=1): err= 0: pid=2368695: Sun Oct 6 11:36:45 2024 00:41:48.854 read: IOPS=520, BW=2080KiB/s (2130kB/s)(20.6MiB/10125msec) 00:41:48.854 slat (usec): min=4, max=109, avg=44.02, stdev=21.26 00:41:48.854 clat (msec): min=21, max=169, avg=30.32, stdev= 7.82 00:41:48.854 lat (msec): min=21, max=169, avg=30.37, stdev= 7.82 00:41:48.854 clat percentiles (msec): 00:41:48.854 | 1.00th=[ 26], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:41:48.854 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:41:48.854 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:41:48.854 | 99.00th=[ 34], 99.50th=[ 47], 99.90th=[ 169], 99.95th=[ 169], 00:41:48.854 | 99.99th=[ 169] 00:41:48.854 bw ( KiB/s): min= 1912, max= 2176, per=4.20%, avg=2099.30, stdev=76.51, samples=20 00:41:48.854 iops : min= 478, max= 544, avg=524.75, stdev=19.05, samples=20 00:41:48.854 lat (msec) : 50=99.70%, 250=0.30% 00:41:48.854 cpu : usr=98.64%, sys=0.94%, ctx=14, majf=0, minf=38 00:41:48.854 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:48.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.854 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.854 issued rwts: total=5266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.854 filename2: (groupid=0, jobs=1): err= 0: pid=2368696: Sun Oct 6 11:36:45 2024 00:41:48.854 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10008msec) 00:41:48.854 slat (nsec): min=4527, max=64573, avg=28342.18, stdev=10054.84 00:41:48.854 clat (usec): min=11498, max=34090, avg=29896.89, stdev=1613.55 00:41:48.854 lat (usec): min=11502, max=34125, avg=29925.23, stdev=1615.08 00:41:48.854 clat percentiles (usec): 00:41:48.854 | 1.00th=[24511], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:41:48.854 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:41:48.854 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:41:48.854 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:41:48.854 | 99.99th=[34341] 00:41:48.854 bw ( KiB/s): min= 2048, max= 2304, per=4.25%, avg=2122.11, stdev=77.69, samples=19 00:41:48.854 iops 
: min= 512, max= 576, avg=530.53, stdev=19.42, samples=19 00:41:48.854 lat (msec) : 20=0.90%, 50=99.10% 00:41:48.854 cpu : usr=97.59%, sys=1.45%, ctx=139, majf=0, minf=40 00:41:48.854 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:48.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.854 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.854 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.854 filename2: (groupid=0, jobs=1): err= 0: pid=2368697: Sun Oct 6 11:36:45 2024 00:41:48.854 read: IOPS=526, BW=2104KiB/s (2155kB/s)(20.8MiB/10127msec) 00:41:48.854 slat (usec): min=6, max=104, avg=21.46, stdev=19.54 00:41:48.854 clat (msec): min=11, max=135, avg=30.23, stdev= 6.73 00:41:48.854 lat (msec): min=11, max=135, avg=30.25, stdev= 6.73 00:41:48.854 clat percentiles (msec): 00:41:48.854 | 1.00th=[ 22], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 29], 00:41:48.854 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:41:48.854 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 35], 95.00th=[ 38], 00:41:48.854 | 99.00th=[ 43], 99.50th=[ 56], 99.90th=[ 136], 99.95th=[ 136], 00:41:48.854 | 99.99th=[ 136] 00:41:48.854 bw ( KiB/s): min= 1916, max= 2256, per=4.25%, avg=2124.35, stdev=70.89, samples=20 00:41:48.854 iops : min= 479, max= 564, avg=531.05, stdev=17.72, samples=20 00:41:48.854 lat (msec) : 20=0.38%, 50=99.02%, 100=0.30%, 250=0.30% 00:41:48.854 cpu : usr=98.37%, sys=1.15%, ctx=66, majf=0, minf=36 00:41:48.854 IO depths : 1=0.5%, 2=1.1%, 4=4.3%, 8=78.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:41:48.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.854 complete : 0=0.0%, 4=89.4%, 8=8.6%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.854 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.854 filename2: (groupid=0, jobs=1): err= 0: pid=2368698: Sun Oct 6 11:36:45 2024 00:41:48.854 read: IOPS=521, BW=2085KiB/s (2135kB/s)(20.6MiB/10129msec) 00:41:48.854 slat (usec): min=7, max=105, avg=21.71, stdev=20.33 00:41:48.854 clat (msec): min=15, max=170, avg=30.60, stdev= 8.31 00:41:48.854 lat (msec): min=15, max=170, avg=30.62, stdev= 8.31 00:41:48.854 clat percentiles (msec): 00:41:48.854 | 1.00th=[ 22], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 29], 00:41:48.854 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:41:48.854 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 36], 95.00th=[ 39], 00:41:48.854 | 99.00th=[ 49], 99.50th=[ 56], 99.90th=[ 169], 99.95th=[ 171], 00:41:48.854 | 99.99th=[ 171] 00:41:48.854 bw ( KiB/s): min= 1840, max= 2160, per=4.21%, avg=2105.35, stdev=80.32, samples=20 00:41:48.854 iops : min= 460, max= 540, avg=526.30, stdev=20.06, samples=20 00:41:48.854 lat (msec) : 20=0.55%, 50=98.54%, 100=0.61%, 250=0.30% 00:41:48.854 cpu : usr=98.54%, sys=1.04%, ctx=12, majf=0, minf=39 00:41:48.854 IO depths : 1=0.1%, 2=0.4%, 4=3.5%, 8=79.8%, 16=16.3%, 32=0.0%, >=64=0.0% 00:41:48.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.854 complete : 0=0.0%, 4=89.3%, 8=8.8%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.854 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.854 00:41:48.854 Run status group 0 (all jobs): 00:41:48.854 READ: bw=48.8MiB/s (51.2MB/s), 
2073KiB/s-2124KiB/s (2123kB/s-2175kB/s), io=496MiB (520MB), run=10003-10161msec 00:41:48.854 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:48.854 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:48.854 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:48.854 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
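Each create_subsystem/destroy_subsystem pair in this test reduces to a short sequence of SPDK RPCs; a minimal standalone sketch of the same cycle for subsystem 0, assuming a running nvmf target application and the repository's scripts/rpc.py talking to its default socket (the next pass below switches to --dif-type 1, which is what the sketch uses):
# 64 MB null bdev, 512-byte blocks with 16 bytes of metadata and DIF type 1
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# export it over NVMe/TCP
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# teardown, mirroring destroy_subsystem above
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_null_delete bdev_null0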
00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.855 bdev_null0 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.855 [2024-10-06 11:36:45.379401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:48.855 11:36:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.855 bdev_null1 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:48.855 { 00:41:48.855 "params": { 00:41:48.855 "name": "Nvme$subsystem", 00:41:48.855 "trtype": "$TEST_TRANSPORT", 00:41:48.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:48.855 "adrfam": "ipv4", 00:41:48.855 "trsvcid": "$NVMF_PORT", 00:41:48.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:48.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:48.855 "hdgst": ${hdgst:-false}, 00:41:48.855 "ddgst": ${ddgst:-false} 00:41:48.855 }, 00:41:48.855 "method": "bdev_nvme_attach_controller" 00:41:48.855 } 00:41:48.855 EOF 00:41:48.855 )") 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@56 -- # cat 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:48.855 11:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:48.855 { 00:41:48.855 "params": { 00:41:48.855 "name": "Nvme$subsystem", 00:41:48.855 "trtype": "$TEST_TRANSPORT", 00:41:48.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:48.855 "adrfam": "ipv4", 00:41:48.856 "trsvcid": "$NVMF_PORT", 00:41:48.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:48.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:48.856 "hdgst": ${hdgst:-false}, 00:41:48.856 "ddgst": ${ddgst:-false} 00:41:48.856 }, 00:41:48.856 "method": "bdev_nvme_attach_controller" 00:41:48.856 } 00:41:48.856 EOF 00:41:48.856 )") 00:41:48.856 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:48.856 11:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:48.856 11:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:48.856 11:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
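The per-subsystem fragments assembled above are merged by jq and streamed to fio over /dev/fd/62 (the bdev JSON), while the generated job file travels over /dev/fd/61; the resolved JSON is printed just below. Outside the harness the same run can be reproduced with ordinary files, roughly as follows (the SPDK checkout path and file names are assumptions):
# bdev.json holds the resolved bdev_nvme_attach_controller config shown below;
# job.fio is the fio job file (sketched after the fio banner further down)
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio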
00:41:48.856 11:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:41:48.856 11:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:48.856 "params": { 00:41:48.856 "name": "Nvme0", 00:41:48.856 "trtype": "tcp", 00:41:48.856 "traddr": "10.0.0.2", 00:41:48.856 "adrfam": "ipv4", 00:41:48.856 "trsvcid": "4420", 00:41:48.856 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:48.856 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:48.856 "hdgst": false, 00:41:48.856 "ddgst": false 00:41:48.856 }, 00:41:48.856 "method": "bdev_nvme_attach_controller" 00:41:48.856 },{ 00:41:48.856 "params": { 00:41:48.856 "name": "Nvme1", 00:41:48.856 "trtype": "tcp", 00:41:48.856 "traddr": "10.0.0.2", 00:41:48.856 "adrfam": "ipv4", 00:41:48.856 "trsvcid": "4420", 00:41:48.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:48.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:48.856 "hdgst": false, 00:41:48.856 "ddgst": false 00:41:48.856 }, 00:41:48.856 "method": "bdev_nvme_attach_controller" 00:41:48.856 }' 00:41:48.856 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:48.856 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:48.856 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:48.856 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:48.856 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:48.856 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:48.856 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:48.856 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:48.856 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:48.856 11:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:48.856 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:48.856 ... 00:41:48.856 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:48.856 ... 
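The trace above assembles a bdev_nvme_attach_controller JSON config, hands it to fio over an anonymous file descriptor, and preloads the SPDK bdev fio plugin so the spdk_bdev ioengine is available. A minimal by-hand equivalent, assuming the same workspace paths and writing the printed config and job sections to ordinary files instead of /dev/fd, would look roughly like:

    # sketch only: nvme.json holds the attach-controller config printed above,
    # rand_params.fio holds the generated filename0/filename1 job sections
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=nvme.json rand_params.fio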
00:41:48.856 fio-3.35 00:41:48.856 Starting 4 threads 00:41:54.119 00:41:54.119 filename0: (groupid=0, jobs=1): err= 0: pid=2370589: Sun Oct 6 11:36:51 2024 00:41:54.119 read: IOPS=2645, BW=20.7MiB/s (21.7MB/s)(103MiB/5003msec) 00:41:54.119 slat (nsec): min=6008, max=61841, avg=13538.99, stdev=8706.59 00:41:54.119 clat (usec): min=848, max=42439, avg=2982.81, stdev=1060.94 00:41:54.119 lat (usec): min=871, max=42470, avg=2996.35, stdev=1060.89 00:41:54.119 clat percentiles (usec): 00:41:54.119 | 1.00th=[ 2008], 5.00th=[ 2442], 10.00th=[ 2606], 20.00th=[ 2737], 00:41:54.119 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2933], 00:41:54.119 | 70.00th=[ 2966], 80.00th=[ 3064], 90.00th=[ 3326], 95.00th=[ 3949], 00:41:54.119 | 99.00th=[ 4686], 99.50th=[ 4752], 99.90th=[ 5407], 99.95th=[42206], 00:41:54.119 | 99.99th=[42206] 00:41:54.119 bw ( KiB/s): min=19504, max=22368, per=24.87%, avg=21127.11, stdev=831.35, samples=9 00:41:54.119 iops : min= 2438, max= 2796, avg=2640.89, stdev=103.92, samples=9 00:41:54.119 lat (usec) : 1000=0.02% 00:41:54.119 lat (msec) : 2=0.95%, 4=94.57%, 10=4.40%, 50=0.06% 00:41:54.119 cpu : usr=96.22%, sys=3.44%, ctx=14, majf=0, minf=9 00:41:54.119 IO depths : 1=0.7%, 2=5.3%, 4=66.3%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:54.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.119 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.119 issued rwts: total=13237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:54.119 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:54.119 filename0: (groupid=0, jobs=1): err= 0: pid=2370590: Sun Oct 6 11:36:51 2024 00:41:54.119 read: IOPS=2649, BW=20.7MiB/s (21.7MB/s)(104MiB/5001msec) 00:41:54.119 slat (usec): min=5, max=101, avg=11.41, stdev= 7.82 00:41:54.119 clat (usec): min=655, max=5659, avg=2986.10, stdev=464.38 00:41:54.119 lat (usec): min=662, max=5686, avg=2997.51, stdev=464.59 00:41:54.119 clat percentiles (usec): 00:41:54.119 | 1.00th=[ 1811], 5.00th=[ 2507], 10.00th=[ 2671], 20.00th=[ 2769], 00:41:54.119 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:41:54.119 | 70.00th=[ 2966], 80.00th=[ 3097], 90.00th=[ 3359], 95.00th=[ 4080], 00:41:54.119 | 99.00th=[ 4686], 99.50th=[ 5014], 99.90th=[ 5407], 99.95th=[ 5538], 00:41:54.119 | 99.99th=[ 5669] 00:41:54.119 bw ( KiB/s): min=20416, max=22128, per=24.89%, avg=21137.33, stdev=667.78, samples=9 00:41:54.119 iops : min= 2552, max= 2766, avg=2642.11, stdev=83.39, samples=9 00:41:54.119 lat (usec) : 750=0.02%, 1000=0.05% 00:41:54.119 lat (msec) : 2=1.65%, 4=92.21%, 10=6.07% 00:41:54.119 cpu : usr=95.82%, sys=3.86%, ctx=8, majf=0, minf=9 00:41:54.119 IO depths : 1=0.1%, 2=2.7%, 4=70.5%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:54.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.119 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.119 issued rwts: total=13248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:54.120 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:54.120 filename1: (groupid=0, jobs=1): err= 0: pid=2370591: Sun Oct 6 11:36:51 2024 00:41:54.120 read: IOPS=2598, BW=20.3MiB/s (21.3MB/s)(102MiB/5002msec) 00:41:54.120 slat (nsec): min=5926, max=73711, avg=11642.60, stdev=7915.80 00:41:54.120 clat (usec): min=697, max=44321, avg=3043.72, stdev=1118.68 00:41:54.120 lat (usec): min=704, max=44344, avg=3055.36, stdev=1118.49 00:41:54.120 clat percentiles (usec): 00:41:54.120 | 1.00th=[ 2147], 
5.00th=[ 2540], 10.00th=[ 2671], 20.00th=[ 2769], 00:41:54.120 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:41:54.120 | 70.00th=[ 2999], 80.00th=[ 3130], 90.00th=[ 3556], 95.00th=[ 4228], 00:41:54.120 | 99.00th=[ 4555], 99.50th=[ 4752], 99.90th=[ 5014], 99.95th=[44303], 00:41:54.120 | 99.99th=[44303] 00:41:54.120 bw ( KiB/s): min=19232, max=21760, per=24.50%, avg=20810.67, stdev=729.14, samples=9 00:41:54.120 iops : min= 2404, max= 2720, avg=2601.33, stdev=91.14, samples=9 00:41:54.120 lat (usec) : 750=0.02% 00:41:54.120 lat (msec) : 2=0.62%, 4=92.08%, 10=7.22%, 50=0.06% 00:41:54.120 cpu : usr=96.16%, sys=3.46%, ctx=12, majf=0, minf=9 00:41:54.120 IO depths : 1=0.1%, 2=2.7%, 4=70.2%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:54.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.120 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.120 issued rwts: total=12998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:54.120 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:54.120 filename1: (groupid=0, jobs=1): err= 0: pid=2370592: Sun Oct 6 11:36:51 2024 00:41:54.120 read: IOPS=2725, BW=21.3MiB/s (22.3MB/s)(107MiB/5002msec) 00:41:54.120 slat (nsec): min=5934, max=73494, avg=11587.08, stdev=7726.54 00:41:54.120 clat (usec): min=868, max=5381, avg=2898.98, stdev=398.17 00:41:54.120 lat (usec): min=890, max=5395, avg=2910.57, stdev=398.43 00:41:54.120 clat percentiles (usec): 00:41:54.120 | 1.00th=[ 1795], 5.00th=[ 2278], 10.00th=[ 2540], 20.00th=[ 2704], 00:41:54.120 | 30.00th=[ 2769], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2933], 00:41:54.120 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3228], 95.00th=[ 3589], 00:41:54.120 | 99.00th=[ 4555], 99.50th=[ 4621], 99.90th=[ 4883], 99.95th=[ 5014], 00:41:54.120 | 99.99th=[ 5080] 00:41:54.120 bw ( KiB/s): min=20976, max=23374, per=25.68%, avg=21811.33, stdev=771.68, samples=9 00:41:54.120 iops : min= 2622, max= 2921, avg=2726.33, stdev=96.27, samples=9 00:41:54.120 lat (usec) : 1000=0.03% 00:41:54.120 lat (msec) : 2=2.20%, 4=95.34%, 10=2.43% 00:41:54.120 cpu : usr=95.70%, sys=3.96%, ctx=11, majf=0, minf=9 00:41:54.120 IO depths : 1=0.3%, 2=5.7%, 4=67.6%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:54.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.120 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.120 issued rwts: total=13632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:54.120 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:54.120 00:41:54.120 Run status group 0 (all jobs): 00:41:54.120 READ: bw=82.9MiB/s (87.0MB/s), 20.3MiB/s-21.3MiB/s (21.3MB/s-22.3MB/s), io=415MiB (435MB), run=5001-5003msec 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # 
set +x 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.120 11:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:54.382 11:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.382 00:41:54.382 real 0m24.157s 00:41:54.382 user 4m52.935s 00:41:54.382 sys 0m5.454s 00:41:54.382 11:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:54.382 11:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:54.382 ************************************ 00:41:54.382 END TEST fio_dif_rand_params 00:41:54.382 ************************************ 00:41:54.382 11:36:51 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:54.382 11:36:51 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:54.382 11:36:51 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:54.382 11:36:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:54.382 ************************************ 00:41:54.382 START TEST fio_dif_digest 00:41:54.382 ************************************ 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:54.382 
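The fio_dif_digest test that starts here drives a DIF-capable null bdev with NVMe/TCP header and data digests enabled (128k blocks, 3 jobs, queue depth 3, 10 s runtime). The subsystem wiring traced below corresponds roughly to the following rpc.py calls against the running target, shown only as a sketch with the addresses and sizes taken from this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420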
11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:54.382 bdev_null0 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:54.382 [2024-10-06 11:36:51.804125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:54.382 { 00:41:54.382 "params": { 00:41:54.382 "name": "Nvme$subsystem", 00:41:54.382 "trtype": 
"$TEST_TRANSPORT", 00:41:54.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:54.382 "adrfam": "ipv4", 00:41:54.382 "trsvcid": "$NVMF_PORT", 00:41:54.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:54.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:54.382 "hdgst": ${hdgst:-false}, 00:41:54.382 "ddgst": ${ddgst:-false} 00:41:54.382 }, 00:41:54.382 "method": "bdev_nvme_attach_controller" 00:41:54.382 } 00:41:54.382 EOF 00:41:54.382 )") 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:54.382 "params": { 00:41:54.382 "name": "Nvme0", 00:41:54.382 "trtype": "tcp", 00:41:54.382 "traddr": "10.0.0.2", 00:41:54.382 "adrfam": "ipv4", 00:41:54.382 "trsvcid": "4420", 00:41:54.382 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:54.382 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:54.382 "hdgst": true, 00:41:54.382 "ddgst": true 00:41:54.382 }, 00:41:54.382 "method": "bdev_nvme_attach_controller" 00:41:54.382 }' 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:54.382 11:36:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:54.640 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:54.640 ... 
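The job spec echoed above matches the parameters set earlier (randread, bs=128k, iodepth=3, three jobs, 10 s) together with the digest-enabled attach config ("hdgst"/"ddgst" true). An approximate stand-alone job file for the single Nvme0 controller, assuming SPDK's usual Nvme0n1 bdev naming and that the generated job file follows this shape, could be written as:

    # approximation of the generated job file, not its exact contents
    cat > digest.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    thread=1
    time_based=1
    runtime=10
    [filename0]
    filename=Nvme0n1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    EOF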
00:41:54.640 fio-3.35 00:41:54.640 Starting 3 threads 00:42:06.839 00:42:06.839 filename0: (groupid=0, jobs=1): err= 0: pid=2371627: Sun Oct 6 11:37:02 2024 00:42:06.839 read: IOPS=291, BW=36.4MiB/s (38.1MB/s)(364MiB/10005msec) 00:42:06.839 slat (nsec): min=6320, max=43815, avg=15958.04, stdev=6541.42 00:42:06.839 clat (usec): min=6730, max=13724, avg=10287.37, stdev=779.82 00:42:06.839 lat (usec): min=6740, max=13733, avg=10303.33, stdev=779.91 00:42:06.839 clat percentiles (usec): 00:42:06.839 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9634], 00:42:06.839 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:42:06.839 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11207], 95.00th=[11600], 00:42:06.839 | 99.00th=[12256], 99.50th=[12518], 99.90th=[13304], 99.95th=[13435], 00:42:06.839 | 99.99th=[13698] 00:42:06.839 bw ( KiB/s): min=36352, max=38400, per=34.39%, avg=37248.00, stdev=572.43, samples=20 00:42:06.839 iops : min= 284, max= 300, avg=291.00, stdev= 4.47, samples=20 00:42:06.839 lat (msec) : 10=34.10%, 20=65.90% 00:42:06.839 cpu : usr=94.90%, sys=4.77%, ctx=20, majf=0, minf=54 00:42:06.839 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.839 issued rwts: total=2912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.839 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:06.839 filename0: (groupid=0, jobs=1): err= 0: pid=2371628: Sun Oct 6 11:37:02 2024 00:42:06.839 read: IOPS=280, BW=35.1MiB/s (36.8MB/s)(352MiB/10044msec) 00:42:06.839 slat (nsec): min=6337, max=43914, avg=15943.29, stdev=6698.51 00:42:06.839 clat (usec): min=6616, max=48474, avg=10662.31, stdev=1292.07 00:42:06.839 lat (usec): min=6628, max=48487, avg=10678.25, stdev=1292.06 00:42:06.839 clat percentiles (usec): 00:42:06.839 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:42:06.839 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:42:06.839 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[12125], 00:42:06.839 | 99.00th=[12780], 99.50th=[13042], 99.90th=[13566], 99.95th=[46924], 00:42:06.839 | 99.99th=[48497] 00:42:06.839 bw ( KiB/s): min=35072, max=37632, per=33.27%, avg=36032.00, stdev=751.54, samples=20 00:42:06.839 iops : min= 274, max= 294, avg=281.50, stdev= 5.87, samples=20 00:42:06.839 lat (msec) : 10=20.55%, 20=79.38%, 50=0.07% 00:42:06.840 cpu : usr=95.11%, sys=4.56%, ctx=16, majf=0, minf=18 00:42:06.840 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.840 issued rwts: total=2817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.840 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:06.840 filename0: (groupid=0, jobs=1): err= 0: pid=2371629: Sun Oct 6 11:37:02 2024 00:42:06.840 read: IOPS=275, BW=34.5MiB/s (36.1MB/s)(346MiB/10044msec) 00:42:06.840 slat (nsec): min=6372, max=72091, avg=19930.80, stdev=7536.88 00:42:06.840 clat (usec): min=8179, max=51942, avg=10838.71, stdev=1844.98 00:42:06.840 lat (usec): min=8207, max=51966, avg=10858.64, stdev=1844.76 00:42:06.840 clat percentiles (usec): 00:42:06.840 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:42:06.840 | 
30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:42:06.840 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:42:06.840 | 99.00th=[12780], 99.50th=[13042], 99.90th=[52167], 99.95th=[52167], 00:42:06.840 | 99.99th=[52167] 00:42:06.840 bw ( KiB/s): min=32512, max=36608, per=32.71%, avg=35430.40, stdev=1011.80, samples=20 00:42:06.840 iops : min= 254, max= 286, avg=276.80, stdev= 7.90, samples=20 00:42:06.840 lat (msec) : 10=16.68%, 20=83.14%, 50=0.07%, 100=0.11% 00:42:06.840 cpu : usr=95.53%, sys=4.12%, ctx=25, majf=0, minf=133 00:42:06.840 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.840 issued rwts: total=2770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.840 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:06.840 00:42:06.840 Run status group 0 (all jobs): 00:42:06.840 READ: bw=106MiB/s (111MB/s), 34.5MiB/s-36.4MiB/s (36.1MB/s-38.1MB/s), io=1062MiB (1114MB), run=10005-10044msec 00:42:06.840 11:37:02 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:06.840 11:37:02 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:06.840 11:37:02 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:06.840 11:37:02 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:06.840 11:37:02 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:06.840 11:37:02 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:06.840 11:37:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:06.840 11:37:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:06.840 11:37:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:06.840 11:37:02 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:06.840 11:37:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:06.840 11:37:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:06.840 11:37:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:06.840 00:42:06.840 real 0m11.140s 00:42:06.840 user 0m35.563s 00:42:06.840 sys 0m1.651s 00:42:06.840 11:37:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:06.840 11:37:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:06.840 ************************************ 00:42:06.840 END TEST fio_dif_digest 00:42:06.840 ************************************ 00:42:06.840 11:37:02 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:06.840 11:37:02 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:06.840 11:37:02 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:06.840 11:37:02 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:42:06.840 11:37:02 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:06.840 11:37:02 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:42:06.840 11:37:02 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:06.840 11:37:02 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:06.840 rmmod nvme_tcp 00:42:06.840 rmmod nvme_fabrics 00:42:06.840 rmmod nvme_keyring 00:42:06.840 11:37:02 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:06.840 11:37:03 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:42:06.840 11:37:03 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:42:06.840 11:37:03 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 2363069 ']' 00:42:06.840 11:37:03 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 2363069 00:42:06.840 11:37:03 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 2363069 ']' 00:42:06.840 11:37:03 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 2363069 00:42:06.840 11:37:03 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:42:06.840 11:37:03 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:06.840 11:37:03 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2363069 00:42:06.840 11:37:03 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:06.840 11:37:03 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:06.840 11:37:03 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2363069' 00:42:06.840 killing process with pid 2363069 00:42:06.840 11:37:03 nvmf_dif -- common/autotest_common.sh@969 -- # kill 2363069 00:42:06.840 11:37:03 nvmf_dif -- common/autotest_common.sh@974 -- # wait 2363069 00:42:06.840 11:37:03 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:42:06.840 11:37:03 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:08.219 Waiting for block devices as requested 00:42:08.219 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:42:08.478 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:08.478 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:08.478 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:08.479 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:08.738 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:08.738 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:08.738 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:08.998 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:08.998 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:08.998 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:08.998 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:09.258 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:09.258 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:09.258 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:09.518 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:09.518 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:09.518 11:37:07 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:09.518 11:37:07 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:09.518 11:37:07 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:42:09.518 11:37:07 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:42:09.518 11:37:07 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:09.518 11:37:07 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:42:09.518 11:37:07 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:09.518 11:37:07 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:09.518 11:37:07 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:09.518 11:37:07 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:09.518 11:37:07 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:12.056 11:37:09 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:12.056 
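The teardown above unloads the nvme kernel modules, kills the target process, resets the PCI bindings via setup.sh reset, strips the SPDK-tagged iptables rule, and removes the test namespace. Condensed, and assuming _remove_spdk_ns simply deletes the cvl_0_0_ns_spdk namespace created during setup, the network cleanup amounts to:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged test rule
    ip netns delete cvl_0_0_ns_spdk                        # assumption: what _remove_spdk_ns does here
    ip -4 addr flush cvl_0_1                               # clear the initiator-side interface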
00:42:12.056 real 1m12.114s 00:42:12.056 user 7m9.765s 00:42:12.056 sys 0m19.742s 00:42:12.056 11:37:09 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:12.056 11:37:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:12.056 ************************************ 00:42:12.056 END TEST nvmf_dif 00:42:12.056 ************************************ 00:42:12.056 11:37:09 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:12.056 11:37:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:12.056 11:37:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:12.056 11:37:09 -- common/autotest_common.sh@10 -- # set +x 00:42:12.056 ************************************ 00:42:12.056 START TEST nvmf_abort_qd_sizes 00:42:12.056 ************************************ 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:12.056 * Looking for test storage... 00:42:12.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:12.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.056 --rc genhtml_branch_coverage=1 00:42:12.056 --rc genhtml_function_coverage=1 00:42:12.056 --rc genhtml_legend=1 00:42:12.056 --rc geninfo_all_blocks=1 00:42:12.056 --rc geninfo_unexecuted_blocks=1 00:42:12.056 00:42:12.056 ' 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:12.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.056 --rc genhtml_branch_coverage=1 00:42:12.056 --rc genhtml_function_coverage=1 00:42:12.056 --rc genhtml_legend=1 00:42:12.056 --rc geninfo_all_blocks=1 00:42:12.056 --rc geninfo_unexecuted_blocks=1 00:42:12.056 00:42:12.056 ' 00:42:12.056 11:37:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:12.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.056 --rc genhtml_branch_coverage=1 00:42:12.056 --rc genhtml_function_coverage=1 00:42:12.056 --rc genhtml_legend=1 00:42:12.056 --rc geninfo_all_blocks=1 00:42:12.056 --rc geninfo_unexecuted_blocks=1 00:42:12.057 00:42:12.057 ' 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:12.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.057 --rc genhtml_branch_coverage=1 00:42:12.057 --rc genhtml_function_coverage=1 00:42:12.057 --rc genhtml_legend=1 00:42:12.057 --rc geninfo_all_blocks=1 00:42:12.057 --rc geninfo_unexecuted_blocks=1 00:42:12.057 00:42:12.057 ' 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:12.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:42:12.057 11:37:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:17.334 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:17.334 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:17.334 Found net devices under 0000:af:00.0: cvl_0_0 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:17.334 Found net devices under 0000:af:00.1: cvl_0_1 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:17.334 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:17.335 11:37:14 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:17.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:17.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:42:17.335 00:42:17.335 --- 10.0.0.2 ping statistics --- 00:42:17.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:17.335 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:17.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:17.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:42:17.335 00:42:17.335 --- 10.0.0.1 ping statistics --- 00:42:17.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:17.335 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:42:17.335 11:37:14 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:19.870 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:42:19.870 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:42:19.870 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:42:19.870 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:42:19.870 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:42:19.870 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:42:19.870 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:42:19.870 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:42:19.870 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:42:19.870 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:42:19.870 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:42:19.870 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:42:19.870 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:42:19.870 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:42:19.870 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:42:19.870 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:42:20.808 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=2379268 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 2379268 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 2379268 ']' 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:20.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:20.808 11:37:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:20.808 [2024-10-06 11:37:18.288018] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:42:20.808 [2024-10-06 11:37:18.288067] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:20.808 [2024-10-06 11:37:18.347174] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:21.066 [2024-10-06 11:37:18.387247] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:21.066 [2024-10-06 11:37:18.387287] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:21.066 [2024-10-06 11:37:18.387295] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:21.066 [2024-10-06 11:37:18.387301] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:21.066 [2024-10-06 11:37:18.387306] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:21.066 [2024-10-06 11:37:18.388675] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:42:21.066 [2024-10-06 11:37:18.388772] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:42:21.066 [2024-10-06 11:37:18.388859] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:42:21.066 [2024-10-06 11:37:18.388860] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:42:21.066 
11:37:18 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:21.066 11:37:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:21.066 ************************************ 00:42:21.066 START TEST spdk_target_abort 00:42:21.066 ************************************ 00:42:21.066 11:37:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:42:21.066 11:37:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:21.066 11:37:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:42:21.066 11:37:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.067 11:37:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:24.357 spdk_targetn1 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:24.357 [2024-10-06 11:37:21.415718] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:24.357 [2024-10-06 11:37:21.444736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:24.357 11:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:27.649 Initializing NVMe Controllers 00:42:27.649 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:27.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:27.649 Initialization complete. Launching workers. 00:42:27.649 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15664, failed: 0 00:42:27.649 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1402, failed to submit 14262 00:42:27.649 success 793, unsuccessful 609, failed 0 00:42:27.649 11:37:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:27.649 11:37:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:30.942 Initializing NVMe Controllers 00:42:30.942 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:30.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:30.942 Initialization complete. Launching workers. 00:42:30.942 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8625, failed: 0 00:42:30.942 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1263, failed to submit 7362 00:42:30.942 success 309, unsuccessful 954, failed 0 00:42:30.942 11:37:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:30.942 11:37:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:34.234 Initializing NVMe Controllers 00:42:34.234 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:34.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:34.234 Initialization complete. Launching workers. 
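Note: each abort pass (queue depths 4, 24 and 64) runs against the target assembled by the rpc_cmd calls traced above; rpc_cmd is the test wrapper around scripts/rpc.py. Condensed, the bring-up and one pass look roughly like this sketch (the 0000:5e:00.0 BDF and the jenkins workspace paths are specific to this host):

  rpc.py bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  # one pass: 50/50 read/write, 4 KiB I/O, aborts issued at the given queue depth
  ./build/examples/abort -q 64 -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'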
00:42:34.234 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37556, failed: 0 00:42:34.234 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2800, failed to submit 34756 00:42:34.234 success 605, unsuccessful 2195, failed 0 00:42:34.234 11:37:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:34.234 11:37:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:34.234 11:37:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:34.234 11:37:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:34.234 11:37:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:34.234 11:37:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:34.234 11:37:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:35.170 11:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:35.170 11:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2379268 00:42:35.170 11:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 2379268 ']' 00:42:35.170 11:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 2379268 00:42:35.170 11:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:42:35.170 11:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:35.170 11:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2379268 00:42:35.170 11:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:35.170 11:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:35.170 11:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2379268' 00:42:35.170 killing process with pid 2379268 00:42:35.170 11:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 2379268 00:42:35.170 11:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 2379268 00:42:35.429 00:42:35.429 real 0m14.220s 00:42:35.429 user 0m54.267s 00:42:35.429 sys 0m2.475s 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:35.429 ************************************ 00:42:35.429 END TEST spdk_target_abort 00:42:35.429 ************************************ 00:42:35.429 11:37:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:35.429 11:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:35.429 11:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:35.429 11:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:35.429 ************************************ 00:42:35.429 START TEST kernel_target_abort 00:42:35.429 
************************************ 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:35.429 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:37.963 Waiting for block devices as requested 00:42:37.963 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:42:38.222 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:38.223 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:38.223 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:38.223 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:38.482 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:38.482 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:38.482 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:38.742 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:38.742 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:38.742 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:38.742 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:39.002 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:39.002 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:39.002 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:39.261 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:39.261 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:39.261 No valid GPT data, bailing 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:39.261 11:37:36 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:42:39.261 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:42:39.521 00:42:39.521 Discovery Log Number of Records 2, Generation counter 2 00:42:39.521 =====Discovery Log Entry 0====== 00:42:39.521 trtype: tcp 00:42:39.521 adrfam: ipv4 00:42:39.521 subtype: current discovery subsystem 00:42:39.521 treq: not specified, sq flow control disable supported 00:42:39.521 portid: 1 00:42:39.521 trsvcid: 4420 00:42:39.521 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:39.521 traddr: 10.0.0.1 00:42:39.521 eflags: none 00:42:39.521 sectype: none 00:42:39.521 =====Discovery Log Entry 1====== 00:42:39.521 trtype: tcp 00:42:39.521 adrfam: ipv4 00:42:39.521 subtype: nvme subsystem 00:42:39.521 treq: not specified, sq flow control disable supported 00:42:39.521 portid: 1 00:42:39.521 trsvcid: 4420 00:42:39.521 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:39.521 traddr: 10.0.0.1 00:42:39.521 eflags: none 00:42:39.521 sectype: none 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:39.521 11:37:36 
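Note: configure_kernel_target builds the in-kernel target through configfs; the bare "echo" lines in the trace are redirected into nvmet attribute files that xtrace does not show. A condensed sketch, assuming the standard nvmet configfs attribute names (device_path, enable, addr_*) and the /dev/nvme0n1 namespace this run selected (the trace also writes an "SPDK-<nqn>" identification string whose target attribute is not visible in the log):

  modprobe nvmet
  cd /sys/kernel/config/nvmet
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  mkdir ports/1
  echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  echo 10.0.0.1     > ports/1/addr_traddr
  echo tcp          > ports/1/addr_trtype
  echo 4420         > ports/1/addr_trsvcid
  echo ipv4         > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/
  nvme discover -t tcp -a 10.0.0.1 -s 4420   # should list nqn.2016-06.io.spdk:testnqn as above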
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:39.521 11:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:42.820 Initializing NVMe Controllers 00:42:42.820 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:42.820 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:42.820 Initialization complete. Launching workers. 00:42:42.820 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 81440, failed: 0 00:42:42.820 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 81440, failed to submit 0 00:42:42.820 success 0, unsuccessful 81440, failed 0 00:42:42.820 11:37:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:42.820 11:37:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:46.111 Initializing NVMe Controllers 00:42:46.111 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:46.111 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:46.111 Initialization complete. Launching workers. 
00:42:46.111 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 131982, failed: 0 00:42:46.111 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33162, failed to submit 98820 00:42:46.111 success 0, unsuccessful 33162, failed 0 00:42:46.111 11:37:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:46.111 11:37:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:48.809 Initializing NVMe Controllers 00:42:48.809 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:48.809 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:48.809 Initialization complete. Launching workers. 00:42:48.809 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 126548, failed: 0 00:42:48.809 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31646, failed to submit 94902 00:42:48.809 success 0, unsuccessful 31646, failed 0 00:42:48.809 11:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:48.809 11:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:48.809 11:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:42:48.809 11:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:48.809 11:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:48.809 11:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:48.809 11:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:48.809 11:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:42:48.809 11:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:42:48.809 11:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:51.346 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:42:51.346 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:42:51.346 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:42:51.346 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:42:51.346 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:42:51.346 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:42:51.346 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:42:51.346 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:42:51.346 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:42:51.605 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:42:51.605 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:42:51.605 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:42:51.605 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:42:51.605 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:42:51.605 0000:80:04.1 (8086 2021): ioatdma 
-> vfio-pci 00:42:51.605 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:42:52.539 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:42:52.539 00:42:52.539 real 0m17.040s 00:42:52.539 user 0m8.088s 00:42:52.539 sys 0m5.122s 00:42:52.539 11:37:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:52.539 11:37:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:52.539 ************************************ 00:42:52.539 END TEST kernel_target_abort 00:42:52.539 ************************************ 00:42:52.539 11:37:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:52.539 11:37:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:52.539 11:37:49 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:52.539 11:37:49 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:52.539 11:37:49 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:52.540 11:37:49 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:52.540 11:37:49 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:52.540 11:37:49 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:52.540 rmmod nvme_tcp 00:42:52.540 rmmod nvme_fabrics 00:42:52.540 rmmod nvme_keyring 00:42:52.540 11:37:50 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:52.540 11:37:50 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:52.540 11:37:50 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:52.540 11:37:50 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 2379268 ']' 00:42:52.540 11:37:50 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 2379268 00:42:52.540 11:37:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 2379268 ']' 00:42:52.540 11:37:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 2379268 00:42:52.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2379268) - No such process 00:42:52.540 11:37:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 2379268 is not found' 00:42:52.540 Process with pid 2379268 is not found 00:42:52.540 11:37:50 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:42:52.540 11:37:50 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:55.074 Waiting for block devices as requested 00:42:55.074 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:42:55.074 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:55.074 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:55.334 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:55.334 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:55.334 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:55.334 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:55.593 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:55.593 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:55.593 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:55.853 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:55.853 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:55.853 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:55.853 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:56.112 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:56.112 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:56.112 
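Note: clean_kernel_target unwinds the same configfs tree in reverse order before unloading the modules; the removals traced above amount to roughly the following (the enable path is inferred, only a bare "echo 0" is visible in the trace):

  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir  /sys/kernel/config/nvmet/ports/1
  rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet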
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:56.371 11:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:56.371 11:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:56.371 11:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:56.371 11:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:42:56.371 11:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:56.371 11:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:42:56.371 11:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:56.371 11:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:56.371 11:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:56.371 11:37:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:56.371 11:37:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:58.278 11:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:58.278 00:42:58.278 real 0m46.627s 00:42:58.278 user 1m6.175s 00:42:58.278 sys 0m15.495s 00:42:58.278 11:37:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:58.278 11:37:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:58.278 ************************************ 00:42:58.278 END TEST nvmf_abort_qd_sizes 00:42:58.278 ************************************ 00:42:58.278 11:37:55 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:58.278 11:37:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:58.278 11:37:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:58.278 11:37:55 -- common/autotest_common.sh@10 -- # set +x 00:42:58.538 ************************************ 00:42:58.538 START TEST keyring_file 00:42:58.538 ************************************ 00:42:58.538 11:37:55 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:58.538 * Looking for test storage... 
00:42:58.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:58.538 11:37:55 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:58.538 11:37:55 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:42:58.538 11:37:55 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:58.538 11:37:56 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:58.538 11:37:56 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:58.538 11:37:56 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:58.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:58.538 --rc genhtml_branch_coverage=1 00:42:58.538 --rc genhtml_function_coverage=1 00:42:58.538 --rc genhtml_legend=1 00:42:58.538 --rc geninfo_all_blocks=1 00:42:58.538 --rc geninfo_unexecuted_blocks=1 00:42:58.538 00:42:58.538 ' 00:42:58.538 11:37:56 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:58.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:58.538 --rc genhtml_branch_coverage=1 00:42:58.538 --rc genhtml_function_coverage=1 00:42:58.538 --rc genhtml_legend=1 00:42:58.538 --rc geninfo_all_blocks=1 
00:42:58.538 --rc geninfo_unexecuted_blocks=1 00:42:58.538 00:42:58.538 ' 00:42:58.538 11:37:56 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:58.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:58.538 --rc genhtml_branch_coverage=1 00:42:58.538 --rc genhtml_function_coverage=1 00:42:58.538 --rc genhtml_legend=1 00:42:58.538 --rc geninfo_all_blocks=1 00:42:58.538 --rc geninfo_unexecuted_blocks=1 00:42:58.538 00:42:58.538 ' 00:42:58.538 11:37:56 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:58.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:58.538 --rc genhtml_branch_coverage=1 00:42:58.538 --rc genhtml_function_coverage=1 00:42:58.538 --rc genhtml_legend=1 00:42:58.538 --rc geninfo_all_blocks=1 00:42:58.538 --rc geninfo_unexecuted_blocks=1 00:42:58.538 00:42:58.538 ' 00:42:58.538 11:37:56 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:58.538 11:37:56 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:58.538 11:37:56 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:58.538 11:37:56 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.538 11:37:56 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.538 11:37:56 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.538 11:37:56 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:58.538 11:37:56 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:58.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:58.538 11:37:56 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:58.538 11:37:56 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:58.538 11:37:56 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:58.538 11:37:56 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:58.538 11:37:56 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:58.538 11:37:56 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:58.538 11:37:56 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:58.538 11:37:56 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:58.538 11:37:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:42:58.538 11:37:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:58.538 11:37:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:58.539 11:37:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:58.539 11:37:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:58.539 11:37:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nmkrLhJlGH 00:42:58.539 11:37:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:58.539 11:37:56 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:58.539 11:37:56 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:42:58.539 11:37:56 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:58.539 11:37:56 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:42:58.539 11:37:56 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:42:58.539 11:37:56 keyring_file -- nvmf/common.sh@731 -- # python - 00:42:58.798 11:37:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nmkrLhJlGH 00:42:58.798 11:37:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nmkrLhJlGH 00:42:58.798 11:37:56 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.nmkrLhJlGH 00:42:58.798 11:37:56 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:58.798 11:37:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:58.798 11:37:56 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:58.798 11:37:56 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:58.798 11:37:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:58.798 11:37:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:58.798 11:37:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.kERqngth26 00:42:58.798 11:37:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:58.798 11:37:56 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:58.798 11:37:56 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:42:58.798 11:37:56 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:58.798 11:37:56 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:42:58.798 11:37:56 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:42:58.798 11:37:56 keyring_file -- nvmf/common.sh@731 -- # python - 00:42:58.798 11:37:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kERqngth26 00:42:58.798 11:37:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kERqngth26 00:42:58.798 11:37:56 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.kERqngth26 00:42:58.798 11:37:56 keyring_file -- keyring/file.sh@30 -- # tgtpid=2387843 00:42:58.798 11:37:56 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:58.798 11:37:56 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2387843 00:42:58.798 11:37:56 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2387843 ']' 00:42:58.798 11:37:56 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:58.798 11:37:56 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:58.798 11:37:56 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:58.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:58.798 11:37:56 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:58.798 11:37:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:58.798 [2024-10-06 11:37:56.256978] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:42:58.798 [2024-10-06 11:37:56.257027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2387843 ] 00:42:58.798 [2024-10-06 11:37:56.309285] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:58.798 [2024-10-06 11:37:56.349400] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:42:59.057 11:37:56 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:59.057 11:37:56 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:59.057 11:37:56 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:59.057 11:37:56 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:59.057 11:37:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:59.057 [2024-10-06 11:37:56.541392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:59.057 null0 00:42:59.057 [2024-10-06 11:37:56.573440] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:59.057 [2024-10-06 11:37:56.573722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:59.057 11:37:56 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:59.057 11:37:56 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:59.057 11:37:56 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:59.057 11:37:56 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:59.057 11:37:56 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:42:59.057 11:37:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:59.057 11:37:56 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:42:59.057 11:37:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:59.057 11:37:56 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:59.057 11:37:56 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:59.057 11:37:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:59.057 [2024-10-06 11:37:56.601503] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:59.057 request: 00:42:59.057 { 00:42:59.058 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:59.058 "secure_channel": false, 00:42:59.058 "listen_address": { 00:42:59.058 "trtype": "tcp", 00:42:59.058 "traddr": "127.0.0.1", 00:42:59.058 "trsvcid": "4420" 00:42:59.058 }, 00:42:59.058 "method": "nvmf_subsystem_add_listener", 00:42:59.058 "req_id": 1 00:42:59.058 } 00:42:59.058 Got JSON-RPC error response 00:42:59.058 response: 00:42:59.058 { 00:42:59.058 
"code": -32602, 00:42:59.058 "message": "Invalid parameters" 00:42:59.058 } 00:42:59.058 11:37:56 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:42:59.058 11:37:56 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:59.058 11:37:56 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:59.058 11:37:56 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:59.058 11:37:56 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:59.058 11:37:56 keyring_file -- keyring/file.sh@47 -- # bperfpid=2387853 00:42:59.058 11:37:56 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2387853 /var/tmp/bperf.sock 00:42:59.058 11:37:56 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2387853 ']' 00:42:59.058 11:37:56 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:59.058 11:37:56 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:59.058 11:37:56 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:59.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:59.058 11:37:56 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:59.058 11:37:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:59.058 11:37:56 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:59.317 [2024-10-06 11:37:56.654228] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:42:59.317 [2024-10-06 11:37:56.654271] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2387853 ] 00:42:59.317 [2024-10-06 11:37:56.709499] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:59.317 [2024-10-06 11:37:56.749504] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:42:59.317 11:37:56 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:59.317 11:37:56 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:59.317 11:37:56 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nmkrLhJlGH 00:42:59.317 11:37:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nmkrLhJlGH 00:42:59.576 11:37:57 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.kERqngth26 00:42:59.576 11:37:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.kERqngth26 00:42:59.834 11:37:57 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:59.834 11:37:57 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:59.834 11:37:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:59.834 11:37:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:59.834 11:37:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:42:59.834 11:37:57 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.nmkrLhJlGH == \/\t\m\p\/\t\m\p\.\n\m\k\r\L\h\J\l\G\H ]] 00:42:59.834 11:37:57 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:59.835 11:37:57 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:59.835 11:37:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:59.835 11:37:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:59.835 11:37:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:00.093 11:37:57 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.kERqngth26 == \/\t\m\p\/\t\m\p\.\k\E\R\q\n\g\t\h\2\6 ]] 00:43:00.093 11:37:57 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:43:00.093 11:37:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:00.093 11:37:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:00.093 11:37:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:00.093 11:37:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:00.093 11:37:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:00.352 11:37:57 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:43:00.352 11:37:57 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:43:00.352 11:37:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:00.352 11:37:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:00.352 11:37:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:00.352 11:37:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:00.352 11:37:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:00.612 11:37:57 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:43:00.612 11:37:57 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:00.612 11:37:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:00.612 [2024-10-06 11:37:58.149132] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:00.871 nvme0n1 00:43:00.871 11:37:58 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:43:00.871 11:37:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:00.871 11:37:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:00.872 11:37:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:00.872 11:37:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:00.872 11:37:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:00.872 11:37:58 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:43:00.872 11:37:58 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:43:00.872 11:37:58 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:43:00.872 11:37:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:00.872 11:37:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:00.872 11:37:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:00.872 11:37:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:01.130 11:37:58 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:43:01.130 11:37:58 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:01.130 Running I/O for 1 seconds... 00:43:02.509 14241.00 IOPS, 55.63 MiB/s 00:43:02.509 Latency(us) 00:43:02.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:02.509 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:43:02.509 nvme0n1 : 1.01 14285.30 55.80 0.00 0.00 8939.38 2652.65 12982.37 00:43:02.509 =================================================================================================================== 00:43:02.509 Total : 14285.30 55.80 0.00 0.00 8939.38 2652.65 12982.37 00:43:02.509 { 00:43:02.509 "results": [ 00:43:02.509 { 00:43:02.509 "job": "nvme0n1", 00:43:02.509 "core_mask": "0x2", 00:43:02.509 "workload": "randrw", 00:43:02.509 "percentage": 50, 00:43:02.509 "status": "finished", 00:43:02.509 "queue_depth": 128, 00:43:02.509 "io_size": 4096, 00:43:02.509 "runtime": 1.005999, 00:43:02.509 "iops": 14285.302470479593, 00:43:02.509 "mibps": 55.80196277531091, 00:43:02.509 "io_failed": 0, 00:43:02.509 "io_timeout": 0, 00:43:02.509 "avg_latency_us": 8939.375353141744, 00:43:02.509 "min_latency_us": 2652.647619047619, 00:43:02.509 "max_latency_us": 12982.369523809524 00:43:02.509 } 00:43:02.509 ], 00:43:02.509 "core_count": 1 00:43:02.509 } 00:43:02.509 11:37:59 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:02.509 11:37:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:02.509 11:37:59 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:43:02.509 11:37:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:02.509 11:37:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:02.509 11:37:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:02.509 11:37:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:02.509 11:37:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:02.769 11:38:00 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:43:02.769 11:38:00 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:43:02.769 11:38:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:02.769 11:38:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:02.769 11:38:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:02.769 11:38:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:02.769 11:38:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:02.769 
11:38:00 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:43:02.769 11:38:00 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:02.769 11:38:00 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:43:02.769 11:38:00 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:02.769 11:38:00 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:43:02.769 11:38:00 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:02.769 11:38:00 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:43:02.769 11:38:00 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:02.769 11:38:00 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:02.770 11:38:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:03.030 [2024-10-06 11:38:00.497891] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:03.030 [2024-10-06 11:38:00.498549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d3d30 (107): Transport endpoint is not connected 00:43:03.030 [2024-10-06 11:38:00.499543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d3d30 (9): Bad file descriptor 00:43:03.030 [2024-10-06 11:38:00.500544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:03.030 [2024-10-06 11:38:00.500556] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:03.030 [2024-10-06 11:38:00.500563] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:03.030 [2024-10-06 11:38:00.500574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
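The attach attempt above deliberately uses key1 instead of the key0 that nvme0 was attached with, so it is expected to fail; the test asserts that by wrapping the call in the NOT helper from autotest_common.sh, which inverts the exit status. A simplified sketch of that pattern (the real helper also routes through valid_exec_arg and records the status in es, as the traced lines show):

NOT() {    # simplified: succeed only if the wrapped command fails
    if "$@"; then
        return 1
    fi
    return 0
}
NOT $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1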
00:43:03.031 request: 00:43:03.031 { 00:43:03.031 "name": "nvme0", 00:43:03.031 "trtype": "tcp", 00:43:03.031 "traddr": "127.0.0.1", 00:43:03.031 "adrfam": "ipv4", 00:43:03.031 "trsvcid": "4420", 00:43:03.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:03.031 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:03.031 "prchk_reftag": false, 00:43:03.031 "prchk_guard": false, 00:43:03.031 "hdgst": false, 00:43:03.031 "ddgst": false, 00:43:03.031 "psk": "key1", 00:43:03.031 "allow_unrecognized_csi": false, 00:43:03.031 "method": "bdev_nvme_attach_controller", 00:43:03.031 "req_id": 1 00:43:03.031 } 00:43:03.031 Got JSON-RPC error response 00:43:03.031 response: 00:43:03.031 { 00:43:03.031 "code": -5, 00:43:03.031 "message": "Input/output error" 00:43:03.031 } 00:43:03.031 11:38:00 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:43:03.031 11:38:00 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:03.031 11:38:00 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:03.031 11:38:00 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:03.031 11:38:00 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:43:03.031 11:38:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:03.031 11:38:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:03.031 11:38:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:03.031 11:38:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:03.031 11:38:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:03.290 11:38:00 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:43:03.290 11:38:00 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:43:03.290 11:38:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:03.290 11:38:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:03.290 11:38:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:03.290 11:38:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:03.290 11:38:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:03.549 11:38:00 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:43:03.549 11:38:00 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:43:03.549 11:38:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:03.549 11:38:01 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:43:03.549 11:38:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:43:03.809 11:38:01 keyring_file -- keyring/file.sh@78 -- # jq length 00:43:03.809 11:38:01 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:43:03.809 11:38:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:04.068 11:38:01 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:43:04.068 11:38:01 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.nmkrLhJlGH 00:43:04.068 11:38:01 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.nmkrLhJlGH 00:43:04.068 11:38:01 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:43:04.068 11:38:01 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.nmkrLhJlGH 00:43:04.068 11:38:01 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:43:04.068 11:38:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:04.068 11:38:01 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:43:04.068 11:38:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:04.068 11:38:01 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nmkrLhJlGH 00:43:04.068 11:38:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nmkrLhJlGH 00:43:04.326 [2024-10-06 11:38:01.645577] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.nmkrLhJlGH': 0100660 00:43:04.326 [2024-10-06 11:38:01.645604] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:43:04.326 request: 00:43:04.326 { 00:43:04.326 "name": "key0", 00:43:04.326 "path": "/tmp/tmp.nmkrLhJlGH", 00:43:04.326 "method": "keyring_file_add_key", 00:43:04.326 "req_id": 1 00:43:04.326 } 00:43:04.326 Got JSON-RPC error response 00:43:04.326 response: 00:43:04.326 { 00:43:04.326 "code": -1, 00:43:04.326 "message": "Operation not permitted" 00:43:04.326 } 00:43:04.326 11:38:01 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:43:04.326 11:38:01 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:04.326 11:38:01 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:04.326 11:38:01 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:04.326 11:38:01 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.nmkrLhJlGH 00:43:04.326 11:38:01 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nmkrLhJlGH 00:43:04.326 11:38:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nmkrLhJlGH 00:43:04.326 11:38:01 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.nmkrLhJlGH 00:43:04.326 11:38:01 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:43:04.326 11:38:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:04.326 11:38:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:04.326 11:38:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:04.326 11:38:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:04.326 11:38:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:04.585 11:38:02 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:43:04.585 11:38:02 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:04.585 11:38:02 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:43:04.585 11:38:02 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:04.585 11:38:02 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:43:04.585 11:38:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:04.585 11:38:02 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:43:04.585 11:38:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:04.585 11:38:02 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:04.585 11:38:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:04.844 [2024-10-06 11:38:02.223111] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.nmkrLhJlGH': No such file or directory 00:43:04.844 [2024-10-06 11:38:02.223134] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:43:04.844 [2024-10-06 11:38:02.223151] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:43:04.844 [2024-10-06 11:38:02.223158] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:43:04.844 [2024-10-06 11:38:02.223177] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:43:04.844 [2024-10-06 11:38:02.223184] bdev_nvme.c:6449:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:43:04.844 request: 00:43:04.844 { 00:43:04.844 "name": "nvme0", 00:43:04.844 "trtype": "tcp", 00:43:04.844 "traddr": "127.0.0.1", 00:43:04.844 "adrfam": "ipv4", 00:43:04.844 "trsvcid": "4420", 00:43:04.844 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:04.844 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:04.844 "prchk_reftag": false, 00:43:04.844 "prchk_guard": false, 00:43:04.844 "hdgst": false, 00:43:04.844 "ddgst": false, 00:43:04.844 "psk": "key0", 00:43:04.844 "allow_unrecognized_csi": false, 00:43:04.844 "method": "bdev_nvme_attach_controller", 00:43:04.844 "req_id": 1 00:43:04.844 } 00:43:04.844 Got JSON-RPC error response 00:43:04.844 response: 00:43:04.844 { 00:43:04.844 "code": -19, 00:43:04.844 "message": "No such device" 00:43:04.844 } 00:43:04.844 11:38:02 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:43:04.844 11:38:02 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:04.845 11:38:02 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:04.845 11:38:02 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:04.845 11:38:02 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:43:04.845 11:38:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:05.104 11:38:02 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:05.104 11:38:02 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:43:05.104 11:38:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:05.104 11:38:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:05.104 11:38:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:05.104 11:38:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:05.104 11:38:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.RvH911l8LE 00:43:05.104 11:38:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:05.104 11:38:02 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:05.104 11:38:02 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:43:05.104 11:38:02 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:43:05.104 11:38:02 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:43:05.104 11:38:02 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:43:05.104 11:38:02 keyring_file -- nvmf/common.sh@731 -- # python - 00:43:05.104 11:38:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.RvH911l8LE 00:43:05.104 11:38:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.RvH911l8LE 00:43:05.104 11:38:02 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.RvH911l8LE 00:43:05.104 11:38:02 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.RvH911l8LE 00:43:05.104 11:38:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.RvH911l8LE 00:43:05.104 11:38:02 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:05.104 11:38:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:05.363 nvme0n1 00:43:05.363 11:38:02 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:43:05.363 11:38:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:05.363 11:38:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:05.363 11:38:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:05.363 11:38:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:05.363 11:38:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:05.622 11:38:03 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:43:05.623 11:38:03 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:43:05.623 11:38:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:05.883 11:38:03 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:43:05.883 11:38:03 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:43:05.883 11:38:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:05.883 11:38:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:05.883 11:38:03 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:06.142 11:38:03 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:43:06.142 11:38:03 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:43:06.142 11:38:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:06.142 11:38:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:06.142 11:38:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:06.142 11:38:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:06.142 11:38:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:06.142 11:38:03 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:43:06.142 11:38:03 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:06.142 11:38:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:06.402 11:38:03 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:43:06.402 11:38:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:06.402 11:38:03 keyring_file -- keyring/file.sh@105 -- # jq length 00:43:06.661 11:38:04 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:43:06.661 11:38:04 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.RvH911l8LE 00:43:06.661 11:38:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.RvH911l8LE 00:43:06.921 11:38:04 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.kERqngth26 00:43:06.921 11:38:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.kERqngth26 00:43:06.921 11:38:04 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:06.921 11:38:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:07.181 nvme0n1 00:43:07.181 11:38:04 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:43:07.181 11:38:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:43:07.441 11:38:04 keyring_file -- keyring/file.sh@113 -- # config='{ 00:43:07.441 "subsystems": [ 00:43:07.441 { 00:43:07.441 "subsystem": "keyring", 00:43:07.441 "config": [ 00:43:07.441 { 00:43:07.441 "method": "keyring_file_add_key", 00:43:07.441 "params": { 00:43:07.441 "name": "key0", 00:43:07.441 "path": "/tmp/tmp.RvH911l8LE" 00:43:07.441 } 00:43:07.441 }, 00:43:07.441 { 00:43:07.441 "method": "keyring_file_add_key", 00:43:07.441 "params": { 00:43:07.441 "name": "key1", 00:43:07.441 "path": "/tmp/tmp.kERqngth26" 00:43:07.441 } 00:43:07.441 } 00:43:07.441 ] 00:43:07.441 
}, 00:43:07.441 { 00:43:07.441 "subsystem": "iobuf", 00:43:07.441 "config": [ 00:43:07.441 { 00:43:07.441 "method": "iobuf_set_options", 00:43:07.441 "params": { 00:43:07.441 "small_pool_count": 8192, 00:43:07.441 "large_pool_count": 1024, 00:43:07.441 "small_bufsize": 8192, 00:43:07.441 "large_bufsize": 135168 00:43:07.441 } 00:43:07.441 } 00:43:07.441 ] 00:43:07.441 }, 00:43:07.441 { 00:43:07.441 "subsystem": "sock", 00:43:07.441 "config": [ 00:43:07.441 { 00:43:07.441 "method": "sock_set_default_impl", 00:43:07.441 "params": { 00:43:07.441 "impl_name": "posix" 00:43:07.442 } 00:43:07.442 }, 00:43:07.442 { 00:43:07.442 "method": "sock_impl_set_options", 00:43:07.442 "params": { 00:43:07.442 "impl_name": "ssl", 00:43:07.442 "recv_buf_size": 4096, 00:43:07.442 "send_buf_size": 4096, 00:43:07.442 "enable_recv_pipe": true, 00:43:07.442 "enable_quickack": false, 00:43:07.442 "enable_placement_id": 0, 00:43:07.442 "enable_zerocopy_send_server": true, 00:43:07.442 "enable_zerocopy_send_client": false, 00:43:07.442 "zerocopy_threshold": 0, 00:43:07.442 "tls_version": 0, 00:43:07.442 "enable_ktls": false 00:43:07.442 } 00:43:07.442 }, 00:43:07.442 { 00:43:07.442 "method": "sock_impl_set_options", 00:43:07.442 "params": { 00:43:07.442 "impl_name": "posix", 00:43:07.442 "recv_buf_size": 2097152, 00:43:07.442 "send_buf_size": 2097152, 00:43:07.442 "enable_recv_pipe": true, 00:43:07.442 "enable_quickack": false, 00:43:07.442 "enable_placement_id": 0, 00:43:07.442 "enable_zerocopy_send_server": true, 00:43:07.442 "enable_zerocopy_send_client": false, 00:43:07.442 "zerocopy_threshold": 0, 00:43:07.442 "tls_version": 0, 00:43:07.442 "enable_ktls": false 00:43:07.442 } 00:43:07.442 } 00:43:07.442 ] 00:43:07.442 }, 00:43:07.442 { 00:43:07.442 "subsystem": "vmd", 00:43:07.442 "config": [] 00:43:07.442 }, 00:43:07.442 { 00:43:07.442 "subsystem": "accel", 00:43:07.442 "config": [ 00:43:07.442 { 00:43:07.442 "method": "accel_set_options", 00:43:07.442 "params": { 00:43:07.442 "small_cache_size": 128, 00:43:07.442 "large_cache_size": 16, 00:43:07.442 "task_count": 2048, 00:43:07.442 "sequence_count": 2048, 00:43:07.442 "buf_count": 2048 00:43:07.442 } 00:43:07.442 } 00:43:07.442 ] 00:43:07.442 }, 00:43:07.442 { 00:43:07.442 "subsystem": "bdev", 00:43:07.442 "config": [ 00:43:07.442 { 00:43:07.442 "method": "bdev_set_options", 00:43:07.442 "params": { 00:43:07.442 "bdev_io_pool_size": 65535, 00:43:07.442 "bdev_io_cache_size": 256, 00:43:07.442 "bdev_auto_examine": true, 00:43:07.442 "iobuf_small_cache_size": 128, 00:43:07.442 "iobuf_large_cache_size": 16 00:43:07.442 } 00:43:07.442 }, 00:43:07.442 { 00:43:07.442 "method": "bdev_raid_set_options", 00:43:07.442 "params": { 00:43:07.442 "process_window_size_kb": 1024, 00:43:07.442 "process_max_bandwidth_mb_sec": 0 00:43:07.442 } 00:43:07.442 }, 00:43:07.442 { 00:43:07.442 "method": "bdev_iscsi_set_options", 00:43:07.442 "params": { 00:43:07.442 "timeout_sec": 30 00:43:07.442 } 00:43:07.442 }, 00:43:07.442 { 00:43:07.442 "method": "bdev_nvme_set_options", 00:43:07.442 "params": { 00:43:07.442 "action_on_timeout": "none", 00:43:07.442 "timeout_us": 0, 00:43:07.442 "timeout_admin_us": 0, 00:43:07.442 "keep_alive_timeout_ms": 10000, 00:43:07.442 "arbitration_burst": 0, 00:43:07.442 "low_priority_weight": 0, 00:43:07.442 "medium_priority_weight": 0, 00:43:07.442 "high_priority_weight": 0, 00:43:07.442 "nvme_adminq_poll_period_us": 10000, 00:43:07.442 "nvme_ioq_poll_period_us": 0, 00:43:07.442 "io_queue_requests": 512, 00:43:07.442 "delay_cmd_submit": true, 00:43:07.442 
"transport_retry_count": 4, 00:43:07.442 "bdev_retry_count": 3, 00:43:07.442 "transport_ack_timeout": 0, 00:43:07.442 "ctrlr_loss_timeout_sec": 0, 00:43:07.442 "reconnect_delay_sec": 0, 00:43:07.442 "fast_io_fail_timeout_sec": 0, 00:43:07.442 "disable_auto_failback": false, 00:43:07.442 "generate_uuids": false, 00:43:07.442 "transport_tos": 0, 00:43:07.442 "nvme_error_stat": false, 00:43:07.442 "rdma_srq_size": 0, 00:43:07.442 "io_path_stat": false, 00:43:07.442 "allow_accel_sequence": false, 00:43:07.442 "rdma_max_cq_size": 0, 00:43:07.442 "rdma_cm_event_timeout_ms": 0, 00:43:07.442 "dhchap_digests": [ 00:43:07.442 "sha256", 00:43:07.442 "sha384", 00:43:07.442 "sha512" 00:43:07.442 ], 00:43:07.442 "dhchap_dhgroups": [ 00:43:07.442 "null", 00:43:07.442 "ffdhe2048", 00:43:07.442 "ffdhe3072", 00:43:07.442 "ffdhe4096", 00:43:07.442 "ffdhe6144", 00:43:07.442 "ffdhe8192" 00:43:07.442 ] 00:43:07.442 } 00:43:07.442 }, 00:43:07.442 { 00:43:07.442 "method": "bdev_nvme_attach_controller", 00:43:07.442 "params": { 00:43:07.442 "name": "nvme0", 00:43:07.442 "trtype": "TCP", 00:43:07.442 "adrfam": "IPv4", 00:43:07.442 "traddr": "127.0.0.1", 00:43:07.442 "trsvcid": "4420", 00:43:07.442 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:07.442 "prchk_reftag": false, 00:43:07.442 "prchk_guard": false, 00:43:07.442 "ctrlr_loss_timeout_sec": 0, 00:43:07.442 "reconnect_delay_sec": 0, 00:43:07.442 "fast_io_fail_timeout_sec": 0, 00:43:07.442 "psk": "key0", 00:43:07.442 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:07.442 "hdgst": false, 00:43:07.442 "ddgst": false 00:43:07.442 } 00:43:07.442 }, 00:43:07.442 { 00:43:07.442 "method": "bdev_nvme_set_hotplug", 00:43:07.442 "params": { 00:43:07.442 "period_us": 100000, 00:43:07.442 "enable": false 00:43:07.442 } 00:43:07.442 }, 00:43:07.442 { 00:43:07.442 "method": "bdev_wait_for_examine" 00:43:07.442 } 00:43:07.442 ] 00:43:07.442 }, 00:43:07.442 { 00:43:07.442 "subsystem": "nbd", 00:43:07.442 "config": [] 00:43:07.442 } 00:43:07.442 ] 00:43:07.442 }' 00:43:07.442 11:38:04 keyring_file -- keyring/file.sh@115 -- # killprocess 2387853 00:43:07.442 11:38:04 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2387853 ']' 00:43:07.442 11:38:04 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2387853 00:43:07.442 11:38:04 keyring_file -- common/autotest_common.sh@955 -- # uname 00:43:07.442 11:38:04 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:07.442 11:38:04 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2387853 00:43:07.442 11:38:04 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:07.442 11:38:04 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:07.442 11:38:04 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2387853' 00:43:07.442 killing process with pid 2387853 00:43:07.442 11:38:04 keyring_file -- common/autotest_common.sh@969 -- # kill 2387853 00:43:07.442 Received shutdown signal, test time was about 1.000000 seconds 00:43:07.442 00:43:07.442 Latency(us) 00:43:07.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:07.442 =================================================================================================================== 00:43:07.442 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:07.442 11:38:04 keyring_file -- common/autotest_common.sh@974 -- # wait 2387853 00:43:07.703 11:38:05 keyring_file -- keyring/file.sh@118 -- # bperfpid=2389325 00:43:07.703 
11:38:05 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2389325 /var/tmp/bperf.sock 00:43:07.703 11:38:05 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2389325 ']' 00:43:07.703 11:38:05 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:07.703 11:38:05 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:43:07.703 11:38:05 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:07.703 11:38:05 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:07.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:07.703 11:38:05 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:43:07.703 "subsystems": [ 00:43:07.703 { 00:43:07.703 "subsystem": "keyring", 00:43:07.703 "config": [ 00:43:07.703 { 00:43:07.703 "method": "keyring_file_add_key", 00:43:07.703 "params": { 00:43:07.703 "name": "key0", 00:43:07.703 "path": "/tmp/tmp.RvH911l8LE" 00:43:07.703 } 00:43:07.703 }, 00:43:07.703 { 00:43:07.703 "method": "keyring_file_add_key", 00:43:07.703 "params": { 00:43:07.703 "name": "key1", 00:43:07.703 "path": "/tmp/tmp.kERqngth26" 00:43:07.703 } 00:43:07.703 } 00:43:07.703 ] 00:43:07.703 }, 00:43:07.703 { 00:43:07.703 "subsystem": "iobuf", 00:43:07.703 "config": [ 00:43:07.703 { 00:43:07.703 "method": "iobuf_set_options", 00:43:07.703 "params": { 00:43:07.703 "small_pool_count": 8192, 00:43:07.703 "large_pool_count": 1024, 00:43:07.703 "small_bufsize": 8192, 00:43:07.703 "large_bufsize": 135168 00:43:07.703 } 00:43:07.703 } 00:43:07.703 ] 00:43:07.703 }, 00:43:07.703 { 00:43:07.703 "subsystem": "sock", 00:43:07.703 "config": [ 00:43:07.703 { 00:43:07.703 "method": "sock_set_default_impl", 00:43:07.703 "params": { 00:43:07.703 "impl_name": "posix" 00:43:07.703 } 00:43:07.703 }, 00:43:07.703 { 00:43:07.703 "method": "sock_impl_set_options", 00:43:07.703 "params": { 00:43:07.703 "impl_name": "ssl", 00:43:07.703 "recv_buf_size": 4096, 00:43:07.703 "send_buf_size": 4096, 00:43:07.703 "enable_recv_pipe": true, 00:43:07.703 "enable_quickack": false, 00:43:07.703 "enable_placement_id": 0, 00:43:07.703 "enable_zerocopy_send_server": true, 00:43:07.703 "enable_zerocopy_send_client": false, 00:43:07.703 "zerocopy_threshold": 0, 00:43:07.703 "tls_version": 0, 00:43:07.703 "enable_ktls": false 00:43:07.703 } 00:43:07.703 }, 00:43:07.703 { 00:43:07.703 "method": "sock_impl_set_options", 00:43:07.703 "params": { 00:43:07.703 "impl_name": "posix", 00:43:07.703 "recv_buf_size": 2097152, 00:43:07.703 "send_buf_size": 2097152, 00:43:07.703 "enable_recv_pipe": true, 00:43:07.703 "enable_quickack": false, 00:43:07.703 "enable_placement_id": 0, 00:43:07.703 "enable_zerocopy_send_server": true, 00:43:07.703 "enable_zerocopy_send_client": false, 00:43:07.703 "zerocopy_threshold": 0, 00:43:07.703 "tls_version": 0, 00:43:07.703 "enable_ktls": false 00:43:07.703 } 00:43:07.703 } 00:43:07.703 ] 00:43:07.703 }, 00:43:07.703 { 00:43:07.703 "subsystem": "vmd", 00:43:07.703 "config": [] 00:43:07.703 }, 00:43:07.703 { 00:43:07.703 "subsystem": "accel", 00:43:07.703 "config": [ 00:43:07.703 { 00:43:07.703 "method": "accel_set_options", 00:43:07.703 "params": { 00:43:07.703 "small_cache_size": 128, 00:43:07.703 "large_cache_size": 16, 00:43:07.703 "task_count": 2048, 00:43:07.703 
"sequence_count": 2048, 00:43:07.703 "buf_count": 2048 00:43:07.703 } 00:43:07.703 } 00:43:07.703 ] 00:43:07.703 }, 00:43:07.703 { 00:43:07.703 "subsystem": "bdev", 00:43:07.703 "config": [ 00:43:07.703 { 00:43:07.703 "method": "bdev_set_options", 00:43:07.703 "params": { 00:43:07.703 "bdev_io_pool_size": 65535, 00:43:07.703 "bdev_io_cache_size": 256, 00:43:07.703 "bdev_auto_examine": true, 00:43:07.703 "iobuf_small_cache_size": 128, 00:43:07.703 "iobuf_large_cache_size": 16 00:43:07.703 } 00:43:07.703 }, 00:43:07.703 { 00:43:07.703 "method": "bdev_raid_set_options", 00:43:07.703 "params": { 00:43:07.703 "process_window_size_kb": 1024, 00:43:07.703 "process_max_bandwidth_mb_sec": 0 00:43:07.703 } 00:43:07.703 }, 00:43:07.703 { 00:43:07.703 "method": "bdev_iscsi_set_options", 00:43:07.703 "params": { 00:43:07.703 "timeout_sec": 30 00:43:07.703 } 00:43:07.703 }, 00:43:07.703 { 00:43:07.703 "method": "bdev_nvme_set_options", 00:43:07.703 "params": { 00:43:07.703 "action_on_timeout": "none", 00:43:07.703 "timeout_us": 0, 00:43:07.703 "timeout_admin_us": 0, 00:43:07.703 "keep_alive_timeout_ms": 10000, 00:43:07.703 "arbitration_burst": 0, 00:43:07.703 "low_priority_weight": 0, 00:43:07.703 "medium_priority_weight": 0, 00:43:07.703 "high_priority_weight": 0, 00:43:07.703 "nvme_adminq_poll_period_us": 10000, 00:43:07.703 "nvme_ioq_poll_period_us": 0, 00:43:07.703 "io_queue_requests": 512, 00:43:07.703 "delay_cmd_submit": true, 00:43:07.703 "transport_retry_count": 4, 00:43:07.703 "bdev_retry_count": 3, 00:43:07.703 "transport_ack_timeout": 0, 00:43:07.703 "ctrlr_loss_timeout_sec": 0, 00:43:07.703 "reconnect_delay_sec": 0, 00:43:07.703 "fast_io_fail_timeout_sec": 0, 00:43:07.703 "disable_auto_failback": false, 00:43:07.703 "generate_uuids": false, 00:43:07.703 "transport_tos": 0, 00:43:07.703 "nvme_error_stat": false, 00:43:07.703 "rdma_srq_size": 0, 00:43:07.703 "io_path_stat": false, 00:43:07.703 "allow_accel_sequence": false, 00:43:07.703 "rdma_max_cq_size": 0, 00:43:07.703 "rdma_cm_event_timeout_ms": 0, 00:43:07.703 "dhchap_digests": [ 00:43:07.703 "sha256", 00:43:07.703 "sha384", 00:43:07.703 "sha512" 00:43:07.703 ], 00:43:07.703 "dhchap_dhgroups": [ 00:43:07.703 "null", 00:43:07.703 "ffdhe2048", 00:43:07.703 "ffdhe3072", 00:43:07.703 "ffdhe4096", 00:43:07.703 "ffdhe6144", 00:43:07.703 "ffdhe8192" 00:43:07.703 ] 00:43:07.703 } 00:43:07.703 }, 00:43:07.703 { 00:43:07.703 "method": "bdev_nvme_attach_controller", 00:43:07.703 "params": { 00:43:07.703 "name": "nvme0", 00:43:07.703 "trtype": "TCP", 00:43:07.703 "adrfam": "IPv4", 00:43:07.703 "traddr": "127.0.0.1", 00:43:07.703 "trsvcid": "4420", 00:43:07.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:07.704 "prchk_reftag": false, 00:43:07.704 "prchk_guard": false, 00:43:07.704 "ctrlr_loss_timeout_sec": 0, 00:43:07.704 "reconnect_delay_sec": 0, 00:43:07.704 "fast_io_fail_timeout_sec": 0, 00:43:07.704 "psk": "key0", 00:43:07.704 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:07.704 "hdgst": false, 00:43:07.704 "ddgst": false 00:43:07.704 } 00:43:07.704 }, 00:43:07.704 { 00:43:07.704 "method": "bdev_nvme_set_hotplug", 00:43:07.704 "params": { 00:43:07.704 "period_us": 100000, 00:43:07.704 "enable": false 00:43:07.704 } 00:43:07.704 }, 00:43:07.704 { 00:43:07.704 "method": "bdev_wait_for_examine" 00:43:07.704 } 00:43:07.704 ] 00:43:07.704 }, 00:43:07.704 { 00:43:07.704 "subsystem": "nbd", 00:43:07.704 "config": [] 00:43:07.704 } 00:43:07.704 ] 00:43:07.704 }' 00:43:07.704 11:38:05 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 
00:43:07.704 11:38:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:07.704 [2024-10-06 11:38:05.206652] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 00:43:07.704 [2024-10-06 11:38:05.206698] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2389325 ] 00:43:07.704 [2024-10-06 11:38:05.261654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:07.964 [2024-10-06 11:38:05.302113] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:43:07.964 [2024-10-06 11:38:05.456031] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:08.532 11:38:06 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:08.532 11:38:06 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:43:08.532 11:38:06 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:43:08.532 11:38:06 keyring_file -- keyring/file.sh@121 -- # jq length 00:43:08.532 11:38:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:08.790 11:38:06 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:43:08.790 11:38:06 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:43:08.790 11:38:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:08.790 11:38:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:08.790 11:38:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:08.790 11:38:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:08.790 11:38:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:09.050 11:38:06 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:43:09.050 11:38:06 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:43:09.050 11:38:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:09.050 11:38:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:09.050 11:38:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:09.050 11:38:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:09.050 11:38:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:09.309 11:38:06 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:43:09.309 11:38:06 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:43:09.309 11:38:06 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:43:09.309 11:38:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:43:09.309 11:38:06 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:43:09.309 11:38:06 keyring_file -- keyring/file.sh@1 -- # cleanup 00:43:09.309 11:38:06 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.RvH911l8LE /tmp/tmp.kERqngth26 00:43:09.309 11:38:06 keyring_file -- keyring/file.sh@20 -- # killprocess 2389325 00:43:09.309 11:38:06 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2389325 ']' 
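Once the second bdevperf instance is up, the checks around this point confirm that both keys were re-created from the saved config and that key0 is again held by the restored nvme0 controller. In condensed form, using the same rpc.py and jq calls that appear in the log:

$rpc keyring_get_keys | jq length                                        # expect 2 (key0 and key1)
$rpc keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'     # expect 2 once nvme0 is attached (1 right after add)
$rpc bdev_nvme_get_controllers | jq -r '.[].name'                        # expect nvme0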
00:43:09.309 11:38:06 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2389325 00:43:09.309 11:38:06 keyring_file -- common/autotest_common.sh@955 -- # uname 00:43:09.309 11:38:06 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:09.309 11:38:06 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2389325 00:43:09.568 11:38:06 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:09.568 11:38:06 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:09.568 11:38:06 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2389325' 00:43:09.568 killing process with pid 2389325 00:43:09.568 11:38:06 keyring_file -- common/autotest_common.sh@969 -- # kill 2389325 00:43:09.568 Received shutdown signal, test time was about 1.000000 seconds 00:43:09.568 00:43:09.568 Latency(us) 00:43:09.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:09.568 =================================================================================================================== 00:43:09.568 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:09.568 11:38:06 keyring_file -- common/autotest_common.sh@974 -- # wait 2389325 00:43:09.568 11:38:07 keyring_file -- keyring/file.sh@21 -- # killprocess 2387843 00:43:09.568 11:38:07 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2387843 ']' 00:43:09.568 11:38:07 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2387843 00:43:09.568 11:38:07 keyring_file -- common/autotest_common.sh@955 -- # uname 00:43:09.568 11:38:07 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:09.568 11:38:07 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2387843 00:43:09.568 11:38:07 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:09.568 11:38:07 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:09.568 11:38:07 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2387843' 00:43:09.568 killing process with pid 2387843 00:43:09.568 11:38:07 keyring_file -- common/autotest_common.sh@969 -- # kill 2387843 00:43:09.568 11:38:07 keyring_file -- common/autotest_common.sh@974 -- # wait 2387843 00:43:10.137 00:43:10.137 real 0m11.567s 00:43:10.137 user 0m28.071s 00:43:10.137 sys 0m2.871s 00:43:10.137 11:38:07 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:10.137 11:38:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:10.137 ************************************ 00:43:10.137 END TEST keyring_file 00:43:10.137 ************************************ 00:43:10.137 11:38:07 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:43:10.137 11:38:07 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:10.137 11:38:07 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:43:10.137 11:38:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:10.137 11:38:07 -- common/autotest_common.sh@10 -- # set +x 00:43:10.137 ************************************ 00:43:10.137 START TEST keyring_linux 00:43:10.137 ************************************ 00:43:10.137 11:38:07 keyring_linux -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:10.137 Joined session keyring: 475404832 00:43:10.137 * Looking for test storage... 00:43:10.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:10.137 11:38:07 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:10.137 11:38:07 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:43:10.137 11:38:07 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:10.137 11:38:07 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@345 -- # : 1 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:10.137 11:38:07 keyring_linux -- scripts/common.sh@368 -- # return 0 00:43:10.137 11:38:07 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:10.137 11:38:07 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:10.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:10.137 --rc genhtml_branch_coverage=1 00:43:10.138 --rc genhtml_function_coverage=1 00:43:10.138 --rc genhtml_legend=1 00:43:10.138 --rc geninfo_all_blocks=1 00:43:10.138 --rc geninfo_unexecuted_blocks=1 00:43:10.138 00:43:10.138 ' 00:43:10.138 11:38:07 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:10.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:10.138 --rc genhtml_branch_coverage=1 00:43:10.138 --rc genhtml_function_coverage=1 00:43:10.138 --rc genhtml_legend=1 00:43:10.138 --rc geninfo_all_blocks=1 00:43:10.138 --rc geninfo_unexecuted_blocks=1 00:43:10.138 00:43:10.138 ' 00:43:10.138 11:38:07 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:10.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:10.138 --rc genhtml_branch_coverage=1 00:43:10.138 --rc genhtml_function_coverage=1 00:43:10.138 --rc genhtml_legend=1 00:43:10.138 --rc geninfo_all_blocks=1 00:43:10.138 --rc geninfo_unexecuted_blocks=1 00:43:10.138 00:43:10.138 ' 00:43:10.138 11:38:07 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:10.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:10.138 --rc genhtml_branch_coverage=1 00:43:10.138 --rc genhtml_function_coverage=1 00:43:10.138 --rc genhtml_legend=1 00:43:10.138 --rc geninfo_all_blocks=1 00:43:10.138 --rc geninfo_unexecuted_blocks=1 00:43:10.138 00:43:10.138 ' 00:43:10.138 11:38:07 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:10.138 11:38:07 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:10.138 11:38:07 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:43:10.138 11:38:07 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:10.138 11:38:07 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:10.138 11:38:07 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:10.138 11:38:07 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:10.138 11:38:07 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:43:10.138 11:38:07 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:10.138 11:38:07 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:10.138 11:38:07 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:10.138 11:38:07 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:10.138 11:38:07 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:10.398 11:38:07 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:43:10.398 11:38:07 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:10.398 11:38:07 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:10.398 11:38:07 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:10.398 11:38:07 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:10.398 11:38:07 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:10.398 11:38:07 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:10.398 11:38:07 keyring_linux -- paths/export.sh@5 -- # export PATH 00:43:10.398 11:38:07 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
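Both keyring suites generate their PSK files the same way: prep_key picks a path (a mktemp file for keyring_file earlier, /tmp/:spdk-test:key0 and :key1 in the keyring_linux steps that follow), format_interchange_psk shells out to an inline python snippet, and the result is chmod'd to 0600. A minimal sketch of that derivation, assuming the key string is used as ASCII and a little-endian CRC32 is appended before base64 encoding per the NVMe TLS PSK interchange format; the exact nvmf/common.sh helper may differ in detail:

format_interchange_psk() {    # sketch only; mirrors the inline 'python -' step seen in the log
    local key=$1 digest=$2
    python3 - << EOF
import base64, zlib
key = b"$key"                                    # test keys are hex strings used as ASCII bytes
crc = zlib.crc32(key).to_bytes(4, "little")      # CRC32 of the PSK, appended little-endian
print("NVMeTLSkey-1:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()), end="")
EOF
}
format_interchange_psk 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
chmod 0600 /tmp/:spdk-test:key0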
00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:10.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:10.398 11:38:07 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:10.398 11:38:07 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:10.398 11:38:07 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:10.398 11:38:07 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:43:10.398 11:38:07 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:43:10.398 11:38:07 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:43:10.398 11:38:07 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:43:10.398 11:38:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:10.398 11:38:07 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:43:10.398 11:38:07 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:10.398 11:38:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:10.398 11:38:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:43:10.398 11:38:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@731 -- # python - 00:43:10.398 11:38:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:43:10.398 11:38:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:43:10.398 /tmp/:spdk-test:key0 00:43:10.398 11:38:07 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:43:10.398 11:38:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:10.398 11:38:07 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:43:10.398 11:38:07 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:10.398 11:38:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:10.398 11:38:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:43:10.398 
11:38:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:43:10.398 11:38:07 keyring_linux -- nvmf/common.sh@731 -- # python - 00:43:10.398 11:38:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:43:10.398 11:38:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:43:10.398 /tmp/:spdk-test:key1 00:43:10.398 11:38:07 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2389863 00:43:10.398 11:38:07 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2389863 00:43:10.398 11:38:07 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:10.398 11:38:07 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2389863 ']' 00:43:10.398 11:38:07 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:10.398 11:38:07 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:10.398 11:38:07 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:10.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:10.398 11:38:07 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:10.398 11:38:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:10.398 [2024-10-06 11:38:07.852172] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
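Note: the key-preparation steps above reduce to two prep_key calls from test/keyring/common.sh; each one wraps a raw hex key into the NVMe TLS PSK interchange format (NVMeTLSkey-1:<digest>:<base64 payload>:) via format_interchange_psk and writes it to a mode-0600 file under /tmp. A minimal stand-alone sketch, assuming the test helpers can be sourced outside the autotest harness and that format_interchange_psk prints the formatted key on stdout; the key values and paths are the ones used in this run:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    source test/keyring/common.sh    # also pulls in test/nvmf/common.sh

    # prep_key <name> <hex key> <digest> <path>
    prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0
    prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1

    cat /tmp/:spdk-test:key0
    # NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The /tmp files are only a staging area; the keys the initiator actually uses are the ones loaded into the kernel session keyring below.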
00:43:10.398 [2024-10-06 11:38:07.852224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2389863 ] 00:43:10.399 [2024-10-06 11:38:07.904918] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:10.399 [2024-10-06 11:38:07.944424] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:43:10.658 11:38:08 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:10.658 11:38:08 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:43:10.658 11:38:08 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:43:10.658 11:38:08 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:10.658 11:38:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:10.658 [2024-10-06 11:38:08.142779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:10.658 null0 00:43:10.658 [2024-10-06 11:38:08.174833] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:10.658 [2024-10-06 11:38:08.175129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:10.658 11:38:08 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:10.658 11:38:08 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:43:10.658 2528268 00:43:10.658 11:38:08 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:43:10.658 549521152 00:43:10.658 11:38:08 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2389877 00:43:10.658 11:38:08 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2389877 /var/tmp/bperf.sock 00:43:10.658 11:38:08 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:43:10.658 11:38:08 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2389877 ']' 00:43:10.658 11:38:08 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:10.658 11:38:08 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:10.658 11:38:08 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:10.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:10.658 11:38:08 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:10.658 11:38:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:10.917 [2024-10-06 11:38:08.248262] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 22.11.4 initialization... 
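Note: at this point both formatted PSKs have been registered as user-type keys in the kernel session keyring (keyctl add user ... @s), and keyctl returned the serial numbers 2528268 and 549521152 that the test later resolves by name; the target is listening on 127.0.0.1:4420 and bdevperf has been started against /var/tmp/bperf.sock with --wait-for-rpc. A sketch of the keyring side using the same key names; the serial numbers are whatever keyctl returns on a given run, and reading the payload back out of the staging file with cat is an assumption about how linux.sh passes it:

    keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s   # prints e.g. 2528268
    keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s   # prints e.g. 549521152

    # resolve a key name back to its serial and inspect the payload
    sn=$(keyctl search @s user :spdk-test:key0)
    keyctl print "$sn"      # NVMeTLSkey-1:00:MDAxMTIy...ZmZwJEiQ:

This search-by-name path is what check_keys exercises below, comparing the .sn reported by keyring_get_keys against keyctl search @s user :spdk-test:key0.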
00:43:10.917 [2024-10-06 11:38:08.248304] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2389877 ] 00:43:10.917 [2024-10-06 11:38:08.300955] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:10.917 [2024-10-06 11:38:08.339311] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:43:10.917 11:38:08 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:10.917 11:38:08 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:43:10.917 11:38:08 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:43:10.917 11:38:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:43:11.174 11:38:08 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:43:11.175 11:38:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:11.433 11:38:08 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:11.433 11:38:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:11.692 [2024-10-06 11:38:09.013626] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:11.692 nvme0n1 00:43:11.692 11:38:09 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:43:11.692 11:38:09 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:43:11.692 11:38:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:11.692 11:38:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:11.692 11:38:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:11.692 11:38:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:11.951 11:38:09 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:43:11.951 11:38:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:11.951 11:38:09 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:43:11.951 11:38:09 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:43:11.951 11:38:09 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:11.951 11:38:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:11.951 11:38:09 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:43:11.951 11:38:09 keyring_linux -- keyring/linux.sh@25 -- # sn=2528268 00:43:11.951 11:38:09 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:43:11.951 11:38:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:11.951 11:38:09 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 2528268 == \2\5\2\8\2\6\8 ]] 00:43:11.951 11:38:09 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 2528268 00:43:11.951 11:38:09 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:43:11.951 11:38:09 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:12.210 Running I/O for 1 seconds... 00:43:13.147 15060.00 IOPS, 58.83 MiB/s 00:43:13.147 Latency(us) 00:43:13.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:13.147 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:13.147 nvme0n1 : 1.01 15081.02 58.91 0.00 0.00 8457.72 2387.38 11234.74 00:43:13.147 =================================================================================================================== 00:43:13.147 Total : 15081.02 58.91 0.00 0.00 8457.72 2387.38 11234.74 00:43:13.147 { 00:43:13.147 "results": [ 00:43:13.147 { 00:43:13.147 "job": "nvme0n1", 00:43:13.147 "core_mask": "0x2", 00:43:13.147 "workload": "randread", 00:43:13.147 "status": "finished", 00:43:13.147 "queue_depth": 128, 00:43:13.147 "io_size": 4096, 00:43:13.147 "runtime": 1.00716, 00:43:13.147 "iops": 15081.019897533659, 00:43:13.147 "mibps": 58.910233974740855, 00:43:13.147 "io_failed": 0, 00:43:13.147 "io_timeout": 0, 00:43:13.147 "avg_latency_us": 8457.722428449159, 00:43:13.147 "min_latency_us": 2387.382857142857, 00:43:13.147 "max_latency_us": 11234.742857142857 00:43:13.147 } 00:43:13.147 ], 00:43:13.147 "core_count": 1 00:43:13.147 } 00:43:13.147 11:38:10 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:13.147 11:38:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:13.415 11:38:10 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:43:13.415 11:38:10 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:43:13.415 11:38:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:13.415 11:38:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:13.415 11:38:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:13.415 11:38:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:13.673 11:38:11 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:43:13.673 11:38:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:13.673 11:38:11 keyring_linux -- keyring/linux.sh@23 -- # return 00:43:13.673 11:38:11 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:13.673 11:38:11 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:43:13.673 11:38:11 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:13.673 11:38:11 keyring_linux -- common/autotest_common.sh@638 -- # 
local arg=bperf_cmd 00:43:13.673 11:38:11 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:13.673 11:38:11 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:43:13.673 11:38:11 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:13.673 11:38:11 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:13.673 11:38:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:13.673 [2024-10-06 11:38:11.174992] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:13.673 [2024-10-06 11:38:11.175271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184e9f0 (107): Transport endpoint is not connected 00:43:13.673 [2024-10-06 11:38:11.176267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184e9f0 (9): Bad file descriptor 00:43:13.673 [2024-10-06 11:38:11.177268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:13.673 [2024-10-06 11:38:11.177278] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:13.673 [2024-10-06 11:38:11.177290] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:13.673 [2024-10-06 11:38:11.177299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
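Note: this is the expected-failure leg of the test. After the successful attach with key0, the 1-second randread run (~15k IOPS, ~8.5 ms average latency), the detach, and check_keys 0 confirming bperf no longer holds any key, linux.sh retries the attach with --psk :spdk-test:key1, which does not match the PSK the target side was configured with, so the connect is torn down and rpc.py exits non-zero; the NOT wrapper from autotest_common.sh treats that failure as a pass. The RPC sequence, as it could be replayed by hand against the bperf socket (socket path, NQNs and key names are the ones from this run):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    $RPC keyring_linux_set_options --enable   # let bdevperf resolve :spdk-test:* via the kernel keyring
    $RPC framework_start_init

    # succeeds: key0 is the PSK the target expects for this host
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
    $RPC keyring_get_keys               # one entry; .sn matches keyctl search @s user :spdk-test:key0
    $RPC bdev_nvme_detach_controller nvme0

    # expected to fail: key1 is not the PSK configured on the target side
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1

The cleanup trap that runs afterwards unlinks both serials from the session keyring ("1 links removed" for each) and kills the bdevperf and spdk_tgt processes.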
00:43:13.673 request: 00:43:13.673 { 00:43:13.673 "name": "nvme0", 00:43:13.673 "trtype": "tcp", 00:43:13.673 "traddr": "127.0.0.1", 00:43:13.673 "adrfam": "ipv4", 00:43:13.673 "trsvcid": "4420", 00:43:13.673 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:13.673 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:13.673 "prchk_reftag": false, 00:43:13.673 "prchk_guard": false, 00:43:13.673 "hdgst": false, 00:43:13.673 "ddgst": false, 00:43:13.673 "psk": ":spdk-test:key1", 00:43:13.673 "allow_unrecognized_csi": false, 00:43:13.673 "method": "bdev_nvme_attach_controller", 00:43:13.673 "req_id": 1 00:43:13.673 } 00:43:13.673 Got JSON-RPC error response 00:43:13.673 response: 00:43:13.673 { 00:43:13.673 "code": -5, 00:43:13.673 "message": "Input/output error" 00:43:13.673 } 00:43:13.673 11:38:11 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:43:13.673 11:38:11 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:13.673 11:38:11 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:13.673 11:38:11 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:13.673 11:38:11 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:43:13.673 11:38:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:13.673 11:38:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:43:13.673 11:38:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:43:13.673 11:38:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:43:13.673 11:38:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:13.674 11:38:11 keyring_linux -- keyring/linux.sh@33 -- # sn=2528268 00:43:13.674 11:38:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 2528268 00:43:13.674 1 links removed 00:43:13.674 11:38:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:13.674 11:38:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:43:13.674 11:38:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:43:13.674 11:38:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:43:13.674 11:38:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:43:13.674 11:38:11 keyring_linux -- keyring/linux.sh@33 -- # sn=549521152 00:43:13.674 11:38:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 549521152 00:43:13.674 1 links removed 00:43:13.674 11:38:11 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2389877 00:43:13.674 11:38:11 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2389877 ']' 00:43:13.674 11:38:11 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2389877 00:43:13.674 11:38:11 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:43:13.674 11:38:11 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:13.674 11:38:11 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2389877 00:43:13.933 11:38:11 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:13.933 11:38:11 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:13.933 11:38:11 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2389877' 00:43:13.933 killing process with pid 2389877 00:43:13.933 11:38:11 keyring_linux -- common/autotest_common.sh@969 -- # kill 2389877 00:43:13.933 Received shutdown signal, test time was about 1.000000 seconds 00:43:13.933 00:43:13.933 
Latency(us) 00:43:13.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:13.933 =================================================================================================================== 00:43:13.933 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:13.933 11:38:11 keyring_linux -- common/autotest_common.sh@974 -- # wait 2389877 00:43:13.933 11:38:11 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2389863 00:43:13.933 11:38:11 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2389863 ']' 00:43:13.933 11:38:11 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2389863 00:43:13.933 11:38:11 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:43:13.933 11:38:11 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:13.933 11:38:11 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2389863 00:43:13.933 11:38:11 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:13.933 11:38:11 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:13.933 11:38:11 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2389863' 00:43:13.933 killing process with pid 2389863 00:43:13.933 11:38:11 keyring_linux -- common/autotest_common.sh@969 -- # kill 2389863 00:43:13.933 11:38:11 keyring_linux -- common/autotest_common.sh@974 -- # wait 2389863 00:43:14.593 00:43:14.593 real 0m4.269s 00:43:14.593 user 0m7.556s 00:43:14.593 sys 0m1.478s 00:43:14.593 11:38:11 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:14.593 11:38:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:14.593 ************************************ 00:43:14.593 END TEST keyring_linux 00:43:14.593 ************************************ 00:43:14.593 11:38:11 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:43:14.593 11:38:11 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:43:14.593 11:38:11 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:43:14.593 11:38:11 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:43:14.593 11:38:11 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:43:14.593 11:38:11 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:43:14.593 11:38:11 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:43:14.593 11:38:11 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:43:14.593 11:38:11 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:43:14.593 11:38:11 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:43:14.593 11:38:11 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:43:14.593 11:38:11 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:43:14.593 11:38:11 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:43:14.593 11:38:11 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:43:14.593 11:38:11 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:43:14.593 11:38:11 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:43:14.593 11:38:11 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:43:14.593 11:38:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:14.593 11:38:11 -- common/autotest_common.sh@10 -- # set +x 00:43:14.593 11:38:11 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:43:14.593 11:38:11 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:43:14.593 11:38:11 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:43:14.593 11:38:11 -- common/autotest_common.sh@10 -- # set +x 00:43:19.870 INFO: APP EXITING 00:43:19.870 INFO: killing all VMs 00:43:19.870 INFO: killing vhost app 00:43:19.870 INFO: 
EXIT DONE 00:43:21.773 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:43:21.773 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:43:21.773 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:43:21.773 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:43:21.773 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:43:21.773 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:43:21.773 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:43:21.773 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:43:21.773 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:43:21.773 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:43:21.773 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:43:22.032 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:43:22.032 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:43:22.032 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:43:22.032 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:43:22.032 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:43:22.032 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:43:24.566 Cleaning 00:43:24.566 Removing: /var/run/dpdk/spdk0/config 00:43:24.566 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:24.566 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:24.566 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:24.566 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:24.566 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:24.566 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:24.566 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:24.566 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:24.566 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:24.566 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:24.566 Removing: /var/run/dpdk/spdk1/config 00:43:24.566 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:24.566 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:24.566 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:24.566 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:24.566 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:24.566 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:24.566 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:43:24.566 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:43:24.566 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:43:24.566 Removing: /var/run/dpdk/spdk1/hugepage_info 00:43:24.566 Removing: /var/run/dpdk/spdk2/config 00:43:24.566 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:43:24.566 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:43:24.566 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:43:24.566 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:43:24.566 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:43:24.566 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:43:24.566 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:43:24.566 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:43:24.566 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:43:24.566 Removing: /var/run/dpdk/spdk2/hugepage_info 00:43:24.566 Removing: /var/run/dpdk/spdk3/config 00:43:24.566 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:43:24.566 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:43:24.566 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:43:24.566 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:43:24.566 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:43:24.566 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:43:24.566 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:43:24.566 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:43:24.566 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:43:24.566 Removing: /var/run/dpdk/spdk3/hugepage_info 00:43:24.566 Removing: /var/run/dpdk/spdk4/config 00:43:24.566 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:43:24.566 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:43:24.566 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:43:24.566 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:43:24.566 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:43:24.566 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:43:24.566 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:43:24.566 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:43:24.566 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:43:24.566 Removing: /var/run/dpdk/spdk4/hugepage_info 00:43:24.566 Removing: /dev/shm/bdev_svc_trace.1 00:43:24.566 Removing: /dev/shm/nvmf_trace.0 00:43:24.566 Removing: /dev/shm/spdk_tgt_trace.pid1843743 00:43:24.566 Removing: /var/run/dpdk/spdk0 00:43:24.566 Removing: /var/run/dpdk/spdk1 00:43:24.566 Removing: /var/run/dpdk/spdk2 00:43:24.825 Removing: /var/run/dpdk/spdk3 00:43:24.825 Removing: /var/run/dpdk/spdk4 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1841668 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1842695 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1843743 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1844366 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1845294 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1845330 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1846389 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1846476 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1846824 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1848359 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1849687 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1850527 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1850712 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1850937 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1851225 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1851469 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1851709 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1851988 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1852725 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1855690 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1855909 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1856144 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1856153 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1856627 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1856640 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1857116 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1857245 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1857590 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1857601 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1857851 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1857861 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1858406 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1858659 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1858947 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1862590 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1866773 
00:43:24.825 Removing: /var/run/dpdk/spdk_pid1876566 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1877238 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1881429 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1881679 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1885861 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1891620 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1894398 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1904897 00:43:24.825 Removing: /var/run/dpdk/spdk_pid1913649 00:43:24.826 Removing: /var/run/dpdk/spdk_pid1915433 00:43:24.826 Removing: /var/run/dpdk/spdk_pid1916334 00:43:24.826 Removing: /var/run/dpdk/spdk_pid1932800 00:43:24.826 Removing: /var/run/dpdk/spdk_pid1936643 00:43:24.826 Removing: /var/run/dpdk/spdk_pid2017645 00:43:24.826 Removing: /var/run/dpdk/spdk_pid2022997 00:43:24.826 Removing: /var/run/dpdk/spdk_pid2029045 00:43:24.826 Removing: /var/run/dpdk/spdk_pid2034929 00:43:24.826 Removing: /var/run/dpdk/spdk_pid2034931 00:43:24.826 Removing: /var/run/dpdk/spdk_pid2035795 00:43:24.826 Removing: /var/run/dpdk/spdk_pid2036517 00:43:24.826 Removing: /var/run/dpdk/spdk_pid2037388 00:43:24.826 Removing: /var/run/dpdk/spdk_pid2038030 00:43:24.826 Removing: /var/run/dpdk/spdk_pid2038060 00:43:24.826 Removing: /var/run/dpdk/spdk_pid2038282 00:43:24.826 Removing: /var/run/dpdk/spdk_pid2038298 00:43:24.826 Removing: /var/run/dpdk/spdk_pid2038397 00:43:24.826 Removing: /var/run/dpdk/spdk_pid2039187 00:43:24.826 Removing: /var/run/dpdk/spdk_pid2040074 00:43:24.826 Removing: /var/run/dpdk/spdk_pid2040968 00:43:24.826 Removing: /var/run/dpdk/spdk_pid2041432 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2041579 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2041858 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2042857 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2043831 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2051719 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2079884 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2084083 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2085837 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2087539 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2087659 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2087886 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2087904 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2088392 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2090175 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2090933 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2091404 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2093611 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2093964 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2094761 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2099121 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2104385 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2104386 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2104387 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2108087 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2111790 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2116672 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2151676 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2155501 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2161371 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2162638 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2164012 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2165316 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2169881 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2173834 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2180953 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2181073 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2186015 
00:43:25.085 Removing: /var/run/dpdk/spdk_pid2186243 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2186464 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2186805 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2186919 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2188270 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2189827 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2191383 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2192986 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2194704 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2196265 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2201994 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2202550 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2204251 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2205266 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2210857 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2213541 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2218791 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2224350 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2232709 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2239548 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2239555 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2257543 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2258013 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2258669 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2259129 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2259858 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2260324 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2260862 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2261451 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2265414 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2265639 00:43:25.085 Removing: /var/run/dpdk/spdk_pid2272086 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2272135 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2277287 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2281430 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2290893 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2291384 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2295552 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2295799 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2299745 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2305252 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2307772 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2318010 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2326534 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2328089 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2328984 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2344575 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2348304 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2350931 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2358261 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2358266 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2363238 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2365418 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2367328 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2368370 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2370301 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2371518 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2379868 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2380315 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2380771 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2382991 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2383532 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2384068 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2387843 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2387853 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2389325 00:43:25.345 Removing: /var/run/dpdk/spdk_pid2389863 
00:43:25.345 Removing: /var/run/dpdk/spdk_pid2389877 00:43:25.345 Clean 00:43:25.345 11:38:22 -- common/autotest_common.sh@1451 -- # return 0 00:43:25.345 11:38:22 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:43:25.345 11:38:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:25.345 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:43:25.345 11:38:22 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:43:25.345 11:38:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:25.345 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:43:25.604 11:38:22 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:25.604 11:38:22 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:43:25.604 11:38:22 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:43:25.604 11:38:22 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:43:25.604 11:38:22 -- spdk/autotest.sh@394 -- # hostname 00:43:25.604 11:38:22 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:43:25.604 geninfo: WARNING: invalid characters removed from testname! 00:43:47.558 11:38:43 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:48.496 11:38:45 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:50.402 11:38:47 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:51.779 11:38:49 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:53.686 11:38:51 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:55.594 11:38:53 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:57.498 11:38:54 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:57.498 11:38:54 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:43:57.498 11:38:54 -- common/autotest_common.sh@1681 -- $ lcov --version 00:43:57.498 11:38:54 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:43:57.499 11:38:54 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:43:57.499 11:38:54 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:43:57.499 11:38:54 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:43:57.499 11:38:54 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:43:57.499 11:38:54 -- scripts/common.sh@336 -- $ IFS=.-: 00:43:57.499 11:38:54 -- scripts/common.sh@336 -- $ read -ra ver1 00:43:57.499 11:38:54 -- scripts/common.sh@337 -- $ IFS=.-: 00:43:57.499 11:38:54 -- scripts/common.sh@337 -- $ read -ra ver2 00:43:57.499 11:38:54 -- scripts/common.sh@338 -- $ local 'op=<' 00:43:57.499 11:38:54 -- scripts/common.sh@340 -- $ ver1_l=2 00:43:57.499 11:38:54 -- scripts/common.sh@341 -- $ ver2_l=1 00:43:57.499 11:38:54 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:43:57.499 11:38:54 -- scripts/common.sh@344 -- $ case "$op" in 00:43:57.499 11:38:54 -- scripts/common.sh@345 -- $ : 1 00:43:57.499 11:38:54 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:43:57.499 11:38:54 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:57.499 11:38:54 -- scripts/common.sh@365 -- $ decimal 1 00:43:57.499 11:38:54 -- scripts/common.sh@353 -- $ local d=1 00:43:57.499 11:38:54 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:43:57.499 11:38:54 -- scripts/common.sh@355 -- $ echo 1 00:43:57.499 11:38:54 -- scripts/common.sh@365 -- $ ver1[v]=1 00:43:57.499 11:38:54 -- scripts/common.sh@366 -- $ decimal 2 00:43:57.499 11:38:54 -- scripts/common.sh@353 -- $ local d=2 00:43:57.499 11:38:54 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:43:57.499 11:38:54 -- scripts/common.sh@355 -- $ echo 2 00:43:57.499 11:38:54 -- scripts/common.sh@366 -- $ ver2[v]=2 00:43:57.499 11:38:54 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:43:57.499 11:38:54 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:43:57.499 11:38:54 -- scripts/common.sh@368 -- $ return 0 00:43:57.499 11:38:54 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:57.499 11:38:54 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:43:57.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:57.499 --rc genhtml_branch_coverage=1 00:43:57.499 --rc genhtml_function_coverage=1 00:43:57.499 --rc genhtml_legend=1 00:43:57.499 --rc geninfo_all_blocks=1 00:43:57.499 --rc geninfo_unexecuted_blocks=1 00:43:57.499 00:43:57.499 ' 00:43:57.499 11:38:54 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:43:57.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:57.499 --rc genhtml_branch_coverage=1 00:43:57.499 --rc genhtml_function_coverage=1 00:43:57.499 --rc genhtml_legend=1 00:43:57.499 --rc geninfo_all_blocks=1 00:43:57.499 --rc geninfo_unexecuted_blocks=1 00:43:57.499 00:43:57.499 ' 00:43:57.499 11:38:54 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:43:57.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:57.499 --rc genhtml_branch_coverage=1 00:43:57.499 --rc genhtml_function_coverage=1 00:43:57.499 --rc genhtml_legend=1 00:43:57.499 --rc geninfo_all_blocks=1 00:43:57.499 --rc geninfo_unexecuted_blocks=1 00:43:57.499 00:43:57.499 ' 00:43:57.499 11:38:54 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:43:57.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:57.499 --rc genhtml_branch_coverage=1 00:43:57.499 --rc genhtml_function_coverage=1 00:43:57.499 --rc genhtml_legend=1 00:43:57.499 --rc geninfo_all_blocks=1 00:43:57.499 --rc geninfo_unexecuted_blocks=1 00:43:57.499 00:43:57.499 ' 00:43:57.499 11:38:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:57.499 11:38:54 -- scripts/common.sh@15 -- $ shopt -s extglob 00:43:57.499 11:38:54 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:43:57.499 11:38:54 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:57.499 11:38:54 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:57.499 11:38:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:57.499 11:38:54 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:57.499 11:38:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:57.499 11:38:54 -- paths/export.sh@5 -- $ export PATH 00:43:57.499 11:38:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:57.499 11:38:54 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:43:57.499 11:38:54 -- common/autobuild_common.sh@486 -- $ date +%s 00:43:57.499 11:38:54 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728207534.XXXXXX 00:43:57.499 11:38:54 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728207534.weiNZx 00:43:57.499 11:38:54 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:43:57.499 11:38:54 -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']' 00:43:57.499 11:38:54 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:43:57.499 11:38:54 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:43:57.499 11:38:54 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:43:57.499 11:38:54 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:43:57.499 11:38:54 -- common/autobuild_common.sh@502 -- $ get_config_params 00:43:57.499 11:38:54 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:43:57.499 11:38:54 -- common/autotest_common.sh@10 -- $ set +x 00:43:57.499 11:38:55 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:43:57.499 11:38:55 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:43:57.499 11:38:55 -- pm/common@17 -- $ local monitor 00:43:57.499 11:38:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:57.499 11:38:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:57.499 11:38:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:57.499 
11:38:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:57.499 11:38:55 -- pm/common@25 -- $ sleep 1 00:43:57.499 11:38:55 -- pm/common@21 -- $ date +%s 00:43:57.499 11:38:55 -- pm/common@21 -- $ date +%s 00:43:57.499 11:38:55 -- pm/common@21 -- $ date +%s 00:43:57.499 11:38:55 -- pm/common@21 -- $ date +%s 00:43:57.499 11:38:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728207535 00:43:57.499 11:38:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728207535 00:43:57.499 11:38:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728207535 00:43:57.499 11:38:55 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728207535 00:43:57.499 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728207535_collect-cpu-load.pm.log 00:43:57.499 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728207535_collect-cpu-temp.pm.log 00:43:57.499 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728207535_collect-vmstat.pm.log 00:43:57.499 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728207535_collect-bmc-pm.bmc.pm.log 00:43:58.438 11:38:56 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:43:58.438 11:38:56 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:43:58.438 11:38:56 -- spdk/autopackage.sh@14 -- $ timing_finish 00:43:58.438 11:38:56 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:58.438 11:38:56 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:43:58.438 11:38:56 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:58.698 11:38:56 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:43:58.698 11:38:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:43:58.698 11:38:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:43:58.698 11:38:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:58.698 11:38:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:43:58.698 11:38:56 -- pm/common@44 -- $ pid=2401211 00:43:58.698 11:38:56 -- pm/common@50 -- $ kill -TERM 2401211 00:43:58.698 11:38:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:58.698 11:38:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:43:58.698 11:38:56 -- pm/common@44 -- $ pid=2401212 00:43:58.698 11:38:56 -- pm/common@50 -- $ kill -TERM 2401212 00:43:58.698 11:38:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:58.698 
11:38:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:43:58.698 11:38:56 -- pm/common@44 -- $ pid=2401214 00:43:58.698 11:38:56 -- pm/common@50 -- $ kill -TERM 2401214 00:43:58.698 11:38:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:58.698 11:38:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:43:58.698 11:38:56 -- pm/common@44 -- $ pid=2401241 00:43:58.698 11:38:56 -- pm/common@50 -- $ sudo -E kill -TERM 2401241 00:43:58.698 + [[ -n 1750226 ]] 00:43:58.698 + sudo kill 1750226 00:43:58.708 [Pipeline] } 00:43:58.724 [Pipeline] // stage 00:43:58.729 [Pipeline] } 00:43:58.743 [Pipeline] // timeout 00:43:58.749 [Pipeline] } 00:43:58.763 [Pipeline] // catchError 00:43:58.768 [Pipeline] } 00:43:58.784 [Pipeline] // wrap 00:43:58.790 [Pipeline] } 00:43:58.803 [Pipeline] // catchError 00:43:58.812 [Pipeline] stage 00:43:58.815 [Pipeline] { (Epilogue) 00:43:58.828 [Pipeline] catchError 00:43:58.830 [Pipeline] { 00:43:58.843 [Pipeline] echo 00:43:58.845 Cleanup processes 00:43:58.850 [Pipeline] sh 00:43:59.136 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:59.136 2401377 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:43:59.136 2401706 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:59.150 [Pipeline] sh 00:43:59.434 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:59.434 ++ grep -v 'sudo pgrep' 00:43:59.434 ++ awk '{print $1}' 00:43:59.434 + sudo kill -9 2401377 00:43:59.445 [Pipeline] sh 00:43:59.728 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:11.970 [Pipeline] sh 00:44:12.253 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:12.253 Artifacts sizes are good 00:44:12.264 [Pipeline] archiveArtifacts 00:44:12.269 Archiving artifacts 00:44:12.492 [Pipeline] sh 00:44:12.816 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:44:12.830 [Pipeline] cleanWs 00:44:12.839 [WS-CLEANUP] Deleting project workspace... 00:44:12.839 [WS-CLEANUP] Deferred wipeout is used... 00:44:12.845 [WS-CLEANUP] done 00:44:12.847 [Pipeline] } 00:44:12.862 [Pipeline] // catchError 00:44:12.872 [Pipeline] sh 00:44:13.152 + logger -p user.info -t JENKINS-CI 00:44:13.160 [Pipeline] } 00:44:13.173 [Pipeline] // stage 00:44:13.179 [Pipeline] } 00:44:13.192 [Pipeline] // node 00:44:13.197 [Pipeline] End of Pipeline 00:44:13.228 Finished: SUCCESS